Test Report: Docker_Linux 22047

4655c6aa5049635fb4cb98fc0f74f66a1c57dbdb:2025-12-06:42658

Failed tests (15/434)

TestAddons/parallel/Ingress (491.75s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-397143 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-397143 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-397143 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [cdfe45e6-04be-4041-b2cb-1d4867877943] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-397143 -n addons-397143
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-12-06 09:11:15.047043225 +0000 UTC m=+660.753682873
addons_test.go:252: (dbg) Run:  kubectl --context addons-397143 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-397143 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-397143/192.168.49.2
Start Time:       Sat, 06 Dec 2025 09:03:14 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.33
IPs:
  IP:  10.244.0.33
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vnp5t (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-vnp5t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  8m1s                 default-scheduler  Successfully assigned default/nginx to addons-397143
Normal   Pulling    5m12s (x5 over 8m)   kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     5m12s (x5 over 8m)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     5m12s (x5 over 8m)   kubelet            Error: ErrImagePull
Normal   BackOff    2m49s (x21 over 8m)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     2m49s (x21 over 8m)  kubelet            Error: ImagePullBackOff
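
The events above trace the failure to Docker Hub's anonymous pull rate limit, not to anything inside the cluster. As a quick host-side triage step, the remaining pull budget can be checked against Docker Hub's documented rate-limit endpoint; this is a sketch, not part of the harness, and assumes curl and jq are available on the agent:

# Fetch an anonymous token for Docker Hub's dedicated rate-limit test image.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
# A HEAD request reports ratelimit-limit / ratelimit-remaining without consuming a pull.
curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit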
addons_test.go:252: (dbg) Run:  kubectl --context addons-397143 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-397143 logs nginx -n default: exit status 1 (70.899344ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-397143 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
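
Because the container never started, kubectl logs can only return BadRequest; the pull error itself is recorded in the pod status. An illustrative alternative for post-mortems like this one (not what helpers_test.go runs):

# The waiting reason and the registry error message live in containerStatuses.
kubectl --context addons-397143 get pod nginx -n default \
  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}{.status.containerStatuses[0].state.waiting.message}{"\n"}'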
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-397143
helpers_test.go:243: (dbg) docker inspect addons-397143:

-- stdout --
	[
	    {
	        "Id": "b250bd2a2eea34a9e7f899297362f95990eca5a4ab2636bd01f039ff80065744",
	        "Created": "2025-12-06T09:00:49.046421689Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 561175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:00:49.075737708Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/b250bd2a2eea34a9e7f899297362f95990eca5a4ab2636bd01f039ff80065744/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b250bd2a2eea34a9e7f899297362f95990eca5a4ab2636bd01f039ff80065744/hostname",
	        "HostsPath": "/var/lib/docker/containers/b250bd2a2eea34a9e7f899297362f95990eca5a4ab2636bd01f039ff80065744/hosts",
	        "LogPath": "/var/lib/docker/containers/b250bd2a2eea34a9e7f899297362f95990eca5a4ab2636bd01f039ff80065744/b250bd2a2eea34a9e7f899297362f95990eca5a4ab2636bd01f039ff80065744-json.log",
	        "Name": "/addons-397143",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-397143:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-397143",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b250bd2a2eea34a9e7f899297362f95990eca5a4ab2636bd01f039ff80065744",
	                "LowerDir": "/var/lib/docker/overlay2/b545b30d37ed21e7a941e78b8b36b564cca5cd414ccd085e87696ffd0b927f0f-init/diff:/var/lib/docker/overlay2/e436edcb7322c840f879b3c5d1d6403a3125a1711763277d84155a12f01e0462/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b545b30d37ed21e7a941e78b8b36b564cca5cd414ccd085e87696ffd0b927f0f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b545b30d37ed21e7a941e78b8b36b564cca5cd414ccd085e87696ffd0b927f0f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b545b30d37ed21e7a941e78b8b36b564cca5cd414ccd085e87696ffd0b927f0f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-397143",
	                "Source": "/var/lib/docker/volumes/addons-397143/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-397143",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-397143",
	                "name.minikube.sigs.k8s.io": "addons-397143",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7e0557e0dc0aacd78c66943f8b7a28f43c126b8e8ba76dbd40e3e5ed98c50aee",
	            "SandboxKey": "/var/run/docker/netns/7e0557e0dc0a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-397143": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "386d3d8bab56f83cae3e94e29d5cbd553f358549165499a0dbdbff3aa9e4e9df",
	                    "EndpointID": "7f265e22bb57aaffdf4d1df099c9fb2eaad4abd4cafb73f039ca10ebc1b7430d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "36:69:eb:9c:9f:3d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-397143",
	                        "b250bd2a2eea"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
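
Single fields can be pulled out of an inspect document with Go templates instead of dumping all of it; the harness itself uses this pattern for the forwarded SSH port further down in these logs. For example:

# Container state and the host port mapped to 22/tcp, from the same document.
docker inspect -f '{{.State.Status}}' addons-397143
docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-397143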
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-397143 -n addons-397143
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 logs -n 25
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-716523                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-716523   │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:00 UTC │
	│ start   │ --download-only -p download-docker-129039 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-129039 │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │                     │
	│ delete  │ -p download-docker-129039                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-129039 │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:00 UTC │
	│ start   │ --download-only -p binary-mirror-001335 --alsologtostderr --binary-mirror http://127.0.0.1:37221 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-001335   │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │                     │
	│ delete  │ -p binary-mirror-001335                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-001335   │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:00 UTC │
	│ addons  │ enable dashboard -p addons-397143                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │                     │
	│ addons  │ disable dashboard -p addons-397143                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │                     │
	│ start   │ -p addons-397143 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:02 UTC │
	│ addons  │ addons-397143 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:02 UTC │
	│ addons  │ addons-397143 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:02 UTC │
	│ addons  │ enable headlamp -p addons-397143 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:02 UTC │
	│ addons  │ addons-397143 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ ip      │ addons-397143 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-397143                                                                                                                                                                                                                                                                                                                                                                                             │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                            │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:08 UTC │ 06 Dec 25 09:09 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:00:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:00:26.539491  560524 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:00:26.539777  560524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:00:26.539788  560524 out.go:374] Setting ErrFile to fd 2...
	I1206 09:00:26.539792  560524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:00:26.540013  560524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:00:26.540583  560524 out.go:368] Setting JSON to false
	I1206 09:00:26.542040  560524 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6174,"bootTime":1765005453,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:00:26.542253  560524 start.go:143] virtualization: kvm guest
	I1206 09:00:26.544089  560524 out.go:179] * [addons-397143] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:00:26.545247  560524 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:00:26.545274  560524 notify.go:221] Checking for updates...
	I1206 09:00:26.547219  560524 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:00:26.548230  560524 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:00:26.549341  560524 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:00:26.550381  560524 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:00:26.551371  560524 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:00:26.552563  560524 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:00:26.576082  560524 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:00:26.576217  560524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:00:26.630549  560524 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-06 09:00:26.620509466 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:00:26.630694  560524 docker.go:319] overlay module found
	I1206 09:00:26.632414  560524 out.go:179] * Using the docker driver based on user configuration
	I1206 09:00:26.633456  560524 start.go:309] selected driver: docker
	I1206 09:00:26.633470  560524 start.go:927] validating driver "docker" against <nil>
	I1206 09:00:26.633481  560524 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:00:26.634062  560524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:00:26.688536  560524 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-06 09:00:26.679077232 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:00:26.688712  560524 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:00:26.688996  560524 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:00:26.690630  560524 out.go:179] * Using Docker driver with root privileges
	I1206 09:00:26.691691  560524 cni.go:84] Creating CNI manager for ""
	I1206 09:00:26.691772  560524 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1206 09:00:26.691786  560524 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 09:00:26.691871  560524 start.go:353] cluster config:
	{Name:addons-397143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-397143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:00:26.693227  560524 out.go:179] * Starting "addons-397143" primary control-plane node in "addons-397143" cluster
	I1206 09:00:26.694307  560524 cache.go:134] Beginning downloading kic base image for docker with docker
	I1206 09:00:26.695252  560524 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:00:26.696193  560524 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1206 09:00:26.696238  560524 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-555179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1206 09:00:26.696252  560524 cache.go:65] Caching tarball of preloaded images
	I1206 09:00:26.696281  560524 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:00:26.696350  560524 preload.go:238] Found /home/jenkins/minikube-integration/22047-555179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1206 09:00:26.696365  560524 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1206 09:00:26.696790  560524 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/config.json ...
	I1206 09:00:26.696818  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/config.json: {Name:mk7ac323ce41973b51bf22f2ed203b69de8fdcb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:26.712669  560524 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 09:00:26.712803  560524 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1206 09:00:26.712830  560524 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1206 09:00:26.712838  560524 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1206 09:00:26.712846  560524 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1206 09:00:26.712852  560524 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from local cache
	I1206 09:00:38.813411  560524 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from cached tarball
	I1206 09:00:38.813453  560524 cache.go:243] Successfully downloaded all kic artifacts
	I1206 09:00:38.813498  560524 start.go:360] acquireMachinesLock for addons-397143: {Name:mkc730cbc98457a5fee329fd5ee4344cb9be9fdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:00:38.813592  560524 start.go:364] duration metric: took 71.446µs to acquireMachinesLock for "addons-397143"
	I1206 09:00:38.813627  560524 start.go:93] Provisioning new machine with config: &{Name:addons-397143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-397143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1206 09:00:38.813700  560524 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:00:38.816005  560524 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1206 09:00:38.816271  560524 start.go:159] libmachine.API.Create for "addons-397143" (driver="docker")
	I1206 09:00:38.816306  560524 client.go:173] LocalClient.Create starting
	I1206 09:00:38.816424  560524 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca.pem
	I1206 09:00:38.882402  560524 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/cert.pem
	I1206 09:00:38.934646  560524 cli_runner.go:164] Run: docker network inspect addons-397143 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:00:38.952678  560524 cli_runner.go:211] docker network inspect addons-397143 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:00:38.952763  560524 network_create.go:284] running [docker network inspect addons-397143] to gather additional debugging logs...
	I1206 09:00:38.952787  560524 cli_runner.go:164] Run: docker network inspect addons-397143
	W1206 09:00:38.968739  560524 cli_runner.go:211] docker network inspect addons-397143 returned with exit code 1
	I1206 09:00:38.968771  560524 network_create.go:287] error running [docker network inspect addons-397143]: docker network inspect addons-397143: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-397143 not found
	I1206 09:00:38.968801  560524 network_create.go:289] output of [docker network inspect addons-397143]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-397143 not found
	
	** /stderr **
	I1206 09:00:38.968961  560524 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:00:38.985258  560524 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c28870}
	I1206 09:00:38.985313  560524 network_create.go:124] attempt to create docker network addons-397143 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 09:00:38.985369  560524 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-397143 addons-397143
	I1206 09:00:39.030379  560524 network_create.go:108] docker network addons-397143 192.168.49.0/24 created
	I1206 09:00:39.030414  560524 kic.go:121] calculated static IP "192.168.49.2" for the "addons-397143" container
	I1206 09:00:39.030485  560524 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:00:39.046412  560524 cli_runner.go:164] Run: docker volume create addons-397143 --label name.minikube.sigs.k8s.io=addons-397143 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:00:39.064516  560524 oci.go:103] Successfully created a docker volume addons-397143
	I1206 09:00:39.064597  560524 cli_runner.go:164] Run: docker run --rm --name addons-397143-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-397143 --entrypoint /usr/bin/test -v addons-397143:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:00:45.752617  560524 cli_runner.go:217] Completed: docker run --rm --name addons-397143-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-397143 --entrypoint /usr/bin/test -v addons-397143:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (6.687967968s)
	I1206 09:00:45.752646  560524 oci.go:107] Successfully prepared a docker volume addons-397143
	I1206 09:00:45.752701  560524 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1206 09:00:45.752713  560524 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:00:45.752763  560524 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-555179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-397143:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:00:48.976506  560524 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-555179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-397143:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.223690811s)
	I1206 09:00:48.976540  560524 kic.go:203] duration metric: took 3.223821962s to extract preloaded images to volume ...
	W1206 09:00:48.976631  560524 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:00:48.976670  560524 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:00:48.976721  560524 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:00:49.029963  560524 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-397143 --name addons-397143 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-397143 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-397143 --network addons-397143 --ip 192.168.49.2 --volume addons-397143:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:00:49.281100  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Running}}
	I1206 09:00:49.298871  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:00:49.317772  560524 cli_runner.go:164] Run: docker exec addons-397143 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:00:49.367708  560524 oci.go:144] the created container "addons-397143" has a running status.
	I1206 09:00:49.367738  560524 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa...
	I1206 09:00:49.419747  560524 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:00:49.448700  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:00:49.468327  560524 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:00:49.468354  560524 kic_runner.go:114] Args: [docker exec --privileged addons-397143 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:00:49.508783  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:00:49.529663  560524 machine.go:94] provisionDockerMachine start ...
	I1206 09:00:49.529777  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:49.546707  560524 main.go:143] libmachine: Using SSH client type: native
	I1206 09:00:49.547082  560524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33171 <nil> <nil>}
	I1206 09:00:49.547102  560524 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:00:49.547768  560524 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37356->127.0.0.1:33171: read: connection reset by peer
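	The connection reset above is transient: sshd inside the freshly started container is not ready yet, and the provisioner retries until the hostname probe succeeds three seconds later. An equivalent manual readiness wait, sketched against the forwarded port logged for this run (33171) and the generated machine key:

	    until ssh -i .minikube/machines/addons-397143/id_rsa -p 33171 \
	          -o StrictHostKeyChecking=no docker@127.0.0.1 true 2>/dev/null; do
	      sleep 1   # sshd still starting; try again
	    done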
	I1206 09:00:52.674906  560524 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-397143
	
	I1206 09:00:52.674955  560524 ubuntu.go:182] provisioning hostname "addons-397143"
	I1206 09:00:52.675030  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:52.692096  560524 main.go:143] libmachine: Using SSH client type: native
	I1206 09:00:52.692362  560524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33171 <nil> <nil>}
	I1206 09:00:52.692379  560524 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-397143 && echo "addons-397143" | sudo tee /etc/hostname
	I1206 09:00:52.826459  560524 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-397143
	
	I1206 09:00:52.826534  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:52.844114  560524 main.go:143] libmachine: Using SSH client type: native
	I1206 09:00:52.844328  560524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33171 <nil> <nil>}
	I1206 09:00:52.844351  560524 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-397143' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-397143/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-397143' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:00:52.970393  560524 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:00:52.970429  560524 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-555179/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-555179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-555179/.minikube}
	I1206 09:00:52.970460  560524 ubuntu.go:190] setting up certificates
	I1206 09:00:52.970479  560524 provision.go:84] configureAuth start
	I1206 09:00:52.970557  560524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-397143
	I1206 09:00:52.988476  560524 provision.go:143] copyHostCerts
	I1206 09:00:52.988546  560524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-555179/.minikube/ca.pem (1082 bytes)
	I1206 09:00:52.988681  560524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-555179/.minikube/cert.pem (1123 bytes)
	I1206 09:00:52.988754  560524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-555179/.minikube/key.pem (1675 bytes)
	I1206 09:00:52.988806  560524 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-555179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca-key.pem org=jenkins.addons-397143 san=[127.0.0.1 192.168.49.2 addons-397143 localhost minikube]
	I1206 09:00:53.151941  560524 provision.go:177] copyRemoteCerts
	I1206 09:00:53.152005  560524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:00:53.152056  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:53.170236  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:00:53.264625  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:00:53.283805  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 09:00:53.301549  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:00:53.318851  560524 provision.go:87] duration metric: took 348.353095ms to configureAuth
	I1206 09:00:53.318878  560524 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:00:53.319055  560524 config.go:182] Loaded profile config "addons-397143": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 09:00:53.319113  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:53.336749  560524 main.go:143] libmachine: Using SSH client type: native
	I1206 09:00:53.337018  560524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33171 <nil> <nil>}
	I1206 09:00:53.337033  560524 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1206 09:00:53.463781  560524 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1206 09:00:53.463805  560524 ubuntu.go:71] root file system type: overlay
	I1206 09:00:53.463959  560524 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1206 09:00:53.464029  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:53.482200  560524 main.go:143] libmachine: Using SSH client type: native
	I1206 09:00:53.482430  560524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33171 <nil> <nil>}
	I1206 09:00:53.482491  560524 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1206 09:00:53.619715  560524 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1206 09:00:53.619797  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:53.637563  560524 main.go:143] libmachine: Using SSH client type: native
	I1206 09:00:53.637787  560524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33171 <nil> <nil>}
	I1206 09:00:53.637804  560524 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1206 09:00:54.697022  560524 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-06 09:00:53.617749558 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1206 09:00:54.697060  560524 machine.go:97] duration metric: took 5.167358872s to provisionDockerMachine
	I1206 09:00:54.697075  560524 client.go:176] duration metric: took 15.880760097s to LocalClient.Create
	I1206 09:00:54.697102  560524 start.go:167] duration metric: took 15.880829581s to libmachine.API.Create "addons-397143"
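	The unit update at 09:00:53 uses a compare-then-swap idiom: render the candidate to docker.service.new, diff it against the installed unit, and only move it into place and bounce the daemon when the two differ, so an unchanged config never triggers a restart. The same pattern, generalized (a sketch, not minikube's code):

	    update_unit() {  # $1 = freshly rendered unit, $2 = installed unit, $3 = service name
	      sudo diff -u "$2" "$1" && return 0   # identical: leave the running service alone
	      sudo mv "$1" "$2"
	      sudo systemctl daemon-reload && sudo systemctl enable "$3" && sudo systemctl restart "$3"
	    }
	    update_unit /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service docker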
	I1206 09:00:54.697117  560524 start.go:293] postStartSetup for "addons-397143" (driver="docker")
	I1206 09:00:54.697131  560524 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:00:54.697213  560524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:00:54.697256  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:54.716272  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:00:54.810724  560524 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:00:54.814408  560524 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:00:54.814442  560524 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:00:54.814455  560524 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-555179/.minikube/addons for local assets ...
	I1206 09:00:54.814507  560524 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-555179/.minikube/files for local assets ...
	I1206 09:00:54.814528  560524 start.go:296] duration metric: took 117.404395ms for postStartSetup
	I1206 09:00:54.814805  560524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-397143
	I1206 09:00:54.832349  560524 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/config.json ...
	I1206 09:00:54.832656  560524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:00:54.832709  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:54.849433  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:00:54.939127  560524 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:00:54.943599  560524 start.go:128] duration metric: took 16.129885011s to createHost
	I1206 09:00:54.943624  560524 start.go:83] releasing machines lock for "addons-397143", held for 16.130019742s
	I1206 09:00:54.943680  560524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-397143
	I1206 09:00:54.960708  560524 ssh_runner.go:195] Run: cat /version.json
	I1206 09:00:54.960762  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:54.960792  560524 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:00:54.960884  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:54.980101  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:00:54.980101  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:00:55.124454  560524 ssh_runner.go:195] Run: systemctl --version
	I1206 09:00:55.130888  560524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:00:55.135362  560524 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:00:55.135435  560524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:00:55.159478  560524 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:00:55.159508  560524 start.go:496] detecting cgroup driver to use...
	I1206 09:00:55.159543  560524 detect.go:190] detected "systemd" cgroup driver on host os
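	The detection result can be reproduced by hand; minikube itself runs the first of these probes later in this log (docker info --format {{.CgroupDriver}}), and the second is a common cross-check (sketch; stat's %T prints cgroup2fs on a cgroup v2 mount):

	    docker info --format '{{.CgroupDriver}}'   # "systemd" on this host, per detect.go above
	    stat -fc %T /sys/fs/cgroup/                # "cgroup2fs" indicates cgroup v2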
	I1206 09:00:55.159656  560524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:00:55.173403  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 09:00:55.183079  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 09:00:55.191439  560524 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1206 09:00:55.191496  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1206 09:00:55.199653  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:00:55.207597  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 09:00:55.215442  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:00:55.223778  560524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:00:55.231575  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 09:00:55.240123  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 09:00:55.248602  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 09:00:55.257201  560524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:00:55.264456  560524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:00:55.271938  560524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:00:55.351207  560524 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1206 09:00:55.428761  560524 start.go:496] detecting cgroup driver to use...
	I1206 09:00:55.428816  560524 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:00:55.428869  560524 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1206 09:00:55.442519  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:00:55.454239  560524 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:00:55.474133  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:00:55.485504  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 09:00:55.497254  560524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:00:55.510568  560524 ssh_runner.go:195] Run: which cri-dockerd
	I1206 09:00:55.513872  560524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1206 09:00:55.522377  560524 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1206 09:00:55.534018  560524 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1206 09:00:55.611076  560524 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1206 09:00:55.689715  560524 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I1206 09:00:55.689858  560524 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1206 09:00:55.702617  560524 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1206 09:00:55.713979  560524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:00:55.789867  560524 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1206 09:00:56.458167  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:00:56.470945  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1206 09:00:56.483771  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1206 09:00:56.495886  560524 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1206 09:00:56.577548  560524 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1206 09:00:56.657320  560524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:00:56.736816  560524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1206 09:00:56.761163  560524 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1206 09:00:56.772893  560524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:00:56.853472  560524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1206 09:00:56.923834  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1206 09:00:56.936608  560524 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1206 09:00:56.936678  560524 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1206 09:00:56.940481  560524 start.go:564] Will wait 60s for crictl version
	I1206 09:00:56.940531  560524 ssh_runner.go:195] Run: which crictl
	I1206 09:00:56.943885  560524 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:00:56.970128  560524 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1206 09:00:56.970199  560524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1206 09:00:56.996082  560524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1206 09:00:57.022774  560524 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1206 09:00:57.022859  560524 cli_runner.go:164] Run: docker network inspect addons-397143 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:00:57.040461  560524 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 09:00:57.044415  560524 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
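	The hosts update above is a filter-and-append pattern: drop any existing host.minikube.internal line, append the current mapping, and install the result with a single sudo cp (the temp file matters because a plain shell redirection would not run with sudo's privileges). Generalized into a helper (sketch):

	    add_host() {  # $1 = IP, $2 = hostname to pin
	      { grep -v "[[:space:]]$2\$" /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	    }
	    add_host 192.168.49.1 host.minikube.internal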
	I1206 09:00:57.054462  560524 kubeadm.go:884] updating cluster {Name:addons-397143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-397143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:00:57.054587  560524 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1206 09:00:57.054646  560524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1206 09:00:57.074482  560524 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1206 09:00:57.074506  560524 docker.go:621] Images already preloaded, skipping extraction
	I1206 09:00:57.074570  560524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1206 09:00:57.094407  560524 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1206 09:00:57.094437  560524 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:00:57.094450  560524 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 docker true true} ...
	I1206 09:00:57.094575  560524 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-397143 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-397143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:00:57.094629  560524 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1206 09:00:57.145051  560524 cni.go:84] Creating CNI manager for ""
	I1206 09:00:57.145094  560524 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1206 09:00:57.145109  560524 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:00:57.145129  560524 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-397143 NodeName:addons-397143 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:00:57.145267  560524 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-397143"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 09:00:57.145331  560524 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:00:57.153792  560524 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:00:57.153860  560524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:00:57.162367  560524 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1206 09:00:57.175361  560524 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:00:57.187548  560524 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
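	With the rendered config staged as kubeadm.yaml.new, it can be exercised before the real init; a hedged sketch, assuming kubeadm's --dry-run mode (which renders manifests under a temporary directory instead of mutating the node; inside this container extra --ignore-preflight-errors flags may still be required):

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run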
	I1206 09:00:57.199842  560524 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:00:57.203449  560524 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:00:57.213429  560524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:00:57.291868  560524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:00:57.316380  560524 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143 for IP: 192.168.49.2
	I1206 09:00:57.316406  560524 certs.go:195] generating shared ca certs ...
	I1206 09:00:57.316429  560524 certs.go:227] acquiring lock for ca certs: {Name:mk4bb3cf92982779c7f527f324bcd90239618827 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.316570  560524 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-555179/.minikube/ca.key
	I1206 09:00:57.392934  560524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-555179/.minikube/ca.crt ...
	I1206 09:00:57.392978  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/ca.crt: {Name:mk80900300c2b34fed0332b66effe4ef5b1d4e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.393213  560524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-555179/.minikube/ca.key ...
	I1206 09:00:57.393233  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/ca.key: {Name:mk38293f1c37569e15cfd07f213c8dd4cda75e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.393363  560524 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-555179/.minikube/proxy-client-ca.key
	I1206 09:00:57.471463  560524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-555179/.minikube/proxy-client-ca.crt ...
	I1206 09:00:57.471506  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/proxy-client-ca.crt: {Name:mk2189b62446dd0057916c9318855bef74f0d257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.471728  560524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-555179/.minikube/proxy-client-ca.key ...
	I1206 09:00:57.471747  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/proxy-client-ca.key: {Name:mk5b6b93f1431d75fe362b6511741d95c3d131cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.471864  560524 certs.go:257] generating profile certs ...
	I1206 09:00:57.471964  560524 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.key
	I1206 09:00:57.471994  560524 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt with IP's: []
	I1206 09:00:57.556379  560524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt ...
	I1206 09:00:57.556422  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: {Name:mk6e330ed6ad1a27c7376513d660fb6406e8f9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.556638  560524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.key ...
	I1206 09:00:57.556658  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.key: {Name:mk20bdf5986eaa27503f234813f0acfbaedc8d6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.556793  560524 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.key.64a9624b
	I1206 09:00:57.556829  560524 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.crt.64a9624b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1206 09:00:57.622666  560524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.crt.64a9624b ...
	I1206 09:00:57.622705  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.crt.64a9624b: {Name:mkd4c56c0e4e0fa4043dd82252aa03b82255adae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.622938  560524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.key.64a9624b ...
	I1206 09:00:57.622958  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.key.64a9624b: {Name:mkc16c3c0d7af7681db53e117f4d6a14268aa281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.623094  560524 certs.go:382] copying /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.crt.64a9624b -> /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.crt
	I1206 09:00:57.623240  560524 certs.go:386] copying /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.key.64a9624b -> /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.key
	I1206 09:00:57.623359  560524 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.key
	I1206 09:00:57.623392  560524 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.crt with IP's: []
	I1206 09:00:57.694161  560524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.crt ...
	I1206 09:00:57.694203  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.crt: {Name:mkafbbf80c073fb0f531ef2ffbc0313f93adfefe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.694431  560524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.key ...
	I1206 09:00:57.694452  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.key: {Name:mk7fd2bc77471f6c57074616f679fa918e5d2069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
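	The profile certs above come from minikube's internal crypto helpers. For orientation, an equivalent flow in plain openssl (a sketch only: key size and validity are assumptions; the SANs are the ones logged for the apiserver cert):

	    openssl genrsa -out apiserver.key 2048
	    openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	      -days 365 -out apiserver.crt \
	      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.49.2')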
	I1206 09:00:57.694701  560524 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 09:00:57.694790  560524 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:00:57.694842  560524 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:00:57.694884  560524 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/key.pem (1675 bytes)
	I1206 09:00:57.695518  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:00:57.714334  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:00:57.731487  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:00:57.748277  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:00:57.765291  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:00:57.783015  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:00:57.800117  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:00:57.817518  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:00:57.834259  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:00:57.854210  560524 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:00:57.866593  560524 ssh_runner.go:195] Run: openssl version
	I1206 09:00:57.872823  560524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:00:57.880032  560524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:00:57.889591  560524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:00:57.893248  560524 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:00 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:00:57.893304  560524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:00:57.926811  560524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:00:57.934667  560524 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
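	The link name b5213941.0 is the OpenSSL hashed-directory convention: libraries look up a CA in /etc/ssl/certs by the subject-name hash plus a .0 suffix, and that hash is exactly what the preceding x509 -hash call printed. The two steps, tied together:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 in this run
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"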
	I1206 09:00:57.942065  560524 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:00:57.945683  560524 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 09:00:57.945740  560524 kubeadm.go:401] StartCluster: {Name:addons-397143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-397143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:00:57.945846  560524 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1206 09:00:57.964685  560524 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:00:57.972818  560524 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:00:57.980556  560524 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:00:57.980615  560524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:00:57.988018  560524 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:00:57.988038  560524 kubeadm.go:158] found existing configuration files:
	
	I1206 09:00:57.988097  560524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:00:57.995437  560524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:00:57.995496  560524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:00:58.002581  560524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:00:58.009941  560524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:00:58.009995  560524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:00:58.017973  560524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:00:58.025553  560524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:00:58.025604  560524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:00:58.033693  560524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:00:58.042247  560524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:00:58.042310  560524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
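	The four grep/rm pairs above implement stale-config cleanup: each kubeconfig is kept only if it already points at the expected control-plane endpoint; on a first start none of the files exist, so every rm -f is a no-op. Collapsed into a loop, the logic reads (sketch):

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	        || sudo rm -f "/etc/kubernetes/$f.conf"
	    done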
	I1206 09:00:58.050501  560524 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:00:58.112588  560524 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:00:58.170554  560524 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:01:07.786900  560524 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:01:07.787025  560524 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:01:07.787117  560524 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:01:07.787165  560524 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:01:07.787205  560524 kubeadm.go:319] OS: Linux
	I1206 09:01:07.787243  560524 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:01:07.787281  560524 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:01:07.787327  560524 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:01:07.787414  560524 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:01:07.787518  560524 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:01:07.787589  560524 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:01:07.787658  560524 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:01:07.787736  560524 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:01:07.787835  560524 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:01:07.787987  560524 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:01:07.788100  560524 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:01:07.788168  560524 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:01:07.789727  560524 out.go:252]   - Generating certificates and keys ...
	I1206 09:01:07.789795  560524 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:01:07.789855  560524 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:01:07.789925  560524 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:01:07.789977  560524 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:01:07.790043  560524 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:01:07.790089  560524 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:01:07.790133  560524 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:01:07.790232  560524 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-397143 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 09:01:07.790279  560524 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:01:07.790377  560524 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-397143 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 09:01:07.790431  560524 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:01:07.790481  560524 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:01:07.790517  560524 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:01:07.790603  560524 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:01:07.790687  560524 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:01:07.790762  560524 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:01:07.790805  560524 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:01:07.790858  560524 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:01:07.790927  560524 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:01:07.791012  560524 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:01:07.791077  560524 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:01:07.792313  560524 out.go:252]   - Booting up control plane ...
	I1206 09:01:07.792387  560524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:01:07.792456  560524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:01:07.792509  560524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:01:07.792591  560524 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:01:07.792664  560524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:01:07.792753  560524 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:01:07.792819  560524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:01:07.792851  560524 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:01:07.792977  560524 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:01:07.793072  560524 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:01:07.793125  560524 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001252402s
	I1206 09:01:07.793207  560524 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:01:07.793308  560524 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1206 09:01:07.793426  560524 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:01:07.793526  560524 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:01:07.793610  560524 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004446485s
	I1206 09:01:07.793666  560524 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.765836939s
	I1206 09:01:07.793722  560524 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501152133s
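
The three control-plane-check timings above come from polling each component's local health endpoint until it answers 200. A sketch of that loop, with the URLs taken verbatim from the log; the one-second poll interval and the decision to skip TLS verification are assumptions (during bootstrap the serving certificates are not yet trusted by the prober):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthy(name, url string, timeout time.Duration) {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        start := time.Now()
        for time.Since(start) < timeout {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s is healthy after %s\n", name, time.Since(start))
                    return
                }
            }
            time.Sleep(time.Second)
        }
        fmt.Printf("%s not healthy within %s\n", name, timeout)
    }

    func main() {
        waitHealthy("kube-apiserver", "https://192.168.49.2:8443/livez", 4*time.Minute)
        waitHealthy("kube-controller-manager", "https://127.0.0.1:10257/healthz", 4*time.Minute)
        waitHealthy("kube-scheduler", "https://127.0.0.1:10259/livez", 4*time.Minute)
    }
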
	I1206 09:01:07.793815  560524 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:01:07.793937  560524 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:01:07.793988  560524 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:01:07.794184  560524 kubeadm.go:319] [mark-control-plane] Marking the node addons-397143 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:01:07.794239  560524 kubeadm.go:319] [bootstrap-token] Using token: ns4ow5.ed430geiuztapn6x
	I1206 09:01:07.795505  560524 out.go:252]   - Configuring RBAC rules ...
	I1206 09:01:07.795590  560524 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:01:07.795664  560524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:01:07.795780  560524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:01:07.795905  560524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:01:07.796030  560524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:01:07.796112  560524 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:01:07.796212  560524 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:01:07.796259  560524 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:01:07.796314  560524 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:01:07.796319  560524 kubeadm.go:319] 
	I1206 09:01:07.796370  560524 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:01:07.796381  560524 kubeadm.go:319] 
	I1206 09:01:07.796464  560524 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:01:07.796473  560524 kubeadm.go:319] 
	I1206 09:01:07.796507  560524 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:01:07.796557  560524 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:01:07.796604  560524 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:01:07.796611  560524 kubeadm.go:319] 
	I1206 09:01:07.796653  560524 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:01:07.796662  560524 kubeadm.go:319] 
	I1206 09:01:07.796702  560524 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:01:07.796709  560524 kubeadm.go:319] 
	I1206 09:01:07.796760  560524 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:01:07.796819  560524 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:01:07.796881  560524 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:01:07.796887  560524 kubeadm.go:319] 
	I1206 09:01:07.796970  560524 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:01:07.797056  560524 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:01:07.797061  560524 kubeadm.go:319] 
	I1206 09:01:07.797134  560524 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ns4ow5.ed430geiuztapn6x \
	I1206 09:01:07.797233  560524 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f716ac9d2865d69eed0a26d1db8a24df2dd71b0fd1ee780c7774713123bac1e1 \
	I1206 09:01:07.797252  560524 kubeadm.go:319] 	--control-plane 
	I1206 09:01:07.797257  560524 kubeadm.go:319] 
	I1206 09:01:07.797329  560524 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:01:07.797335  560524 kubeadm.go:319] 
	I1206 09:01:07.797398  560524 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ns4ow5.ed430geiuztapn6x \
	I1206 09:01:07.797494  560524 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f716ac9d2865d69eed0a26d1db8a24df2dd71b0fd1ee780c7774713123bac1e1 
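
The --discovery-token-ca-cert-hash printed above is not a hash of the certificate file itself: kubeadm pins SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA (the RFC 7469 public-key-pin format). It can be recomputed from ca.crt like this; the path is the conventional kubeadm location:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Re-encode just the public key as DER SubjectPublicKeyInfo
        // and hash that, matching kubeadm's pin format.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
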
	I1206 09:01:07.797508  560524 cni.go:84] Creating CNI manager for ""
	I1206 09:01:07.797525  560524 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1206 09:01:07.798698  560524 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 09:01:07.799613  560524 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 09:01:07.807969  560524 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
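
The 496-byte conflist scp'd above is generated from a template and its contents are not shown in the log; the following is a representative bridge-plus-portmap configuration of the kind minikube writes when it recommends the bridge CNI (every field value here is illustrative, not the file's actual contents):

    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
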
	I1206 09:01:07.820476  560524 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:01:07.820551  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:07.820607  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-397143 minikube.k8s.io/updated_at=2025_12_06T09_01_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=addons-397143 minikube.k8s.io/primary=true
	I1206 09:01:07.831980  560524 ops.go:34] apiserver oom_adj: -16
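
An oom_adj of -16 makes the kernel's OOM killer strongly prefer sacrificing other processes before kube-apiserver. Reading the value back, as the /bin/bash -c line above does, is a one-liner in Go as well (the pgrep flags here are simplified from the log's full pattern match):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }
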
	I1206 09:01:07.900018  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:08.400477  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:08.900336  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:09.400663  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:09.900321  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:10.400371  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:10.900680  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:11.400334  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:11.900841  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:12.401057  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:12.476765  560524 kubeadm.go:1114] duration metric: took 4.656262539s to wait for elevateKubeSystemPrivileges
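
The half-second cadence of the `kubectl get sa default` runs above is a fixed-interval poll: the default service account must exist before the cluster-admin binding created earlier is usable, and the only signal is the get starting to succeed. The loop reduces to the following sketch; the kubectl path and kubeconfig mirror the log, while the two-minute deadline is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        const kubectl = "/var/lib/minikube/binaries/v1.34.2/kubectl"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }
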
	I1206 09:01:12.476809  560524 kubeadm.go:403] duration metric: took 14.53107513s to StartCluster
	I1206 09:01:12.476830  560524 settings.go:142] acquiring lock: {Name:mk6c714838f6ea9636d4320a94ca67badc317f70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:01:12.476994  560524 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:01:12.477464  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/kubeconfig: {Name:mk8a2d601ffa4d6c208ceb157eb91d604defe102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:01:12.477679  560524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:01:12.477708  560524 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1206 09:01:12.477778  560524 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1206 09:01:12.477906  560524 addons.go:70] Setting ingress-dns=true in profile "addons-397143"
	I1206 09:01:12.477935  560524 addons.go:70] Setting volcano=true in profile "addons-397143"
	I1206 09:01:12.477942  560524 addons.go:239] Setting addon ingress-dns=true in "addons-397143"
	I1206 09:01:12.477955  560524 addons.go:239] Setting addon volcano=true in "addons-397143"
	I1206 09:01:12.477953  560524 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-397143"
	I1206 09:01:12.477953  560524 addons.go:70] Setting registry-creds=true in profile "addons-397143"
	I1206 09:01:12.477986  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.477987  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.477994  560524 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-397143"
	I1206 09:01:12.478007  560524 addons.go:70] Setting inspektor-gadget=true in profile "addons-397143"
	I1206 09:01:12.478021  560524 addons.go:239] Setting addon inspektor-gadget=true in "addons-397143"
	I1206 09:01:12.478052  560524 config.go:182] Loaded profile config "addons-397143": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 09:01:12.478066  560524 addons.go:70] Setting metrics-server=true in profile "addons-397143"
	I1206 09:01:12.478086  560524 addons.go:239] Setting addon metrics-server=true in "addons-397143"
	I1206 09:01:12.478101  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.478057  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.478372  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.478527  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.478548  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.478595  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.478635  560524 addons.go:70] Setting volumesnapshots=true in profile "addons-397143"
	I1206 09:01:12.478653  560524 addons.go:239] Setting addon volumesnapshots=true in "addons-397143"
	I1206 09:01:12.478688  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.478704  560524 addons.go:70] Setting storage-provisioner=true in profile "addons-397143"
	I1206 09:01:12.478728  560524 addons.go:239] Setting addon storage-provisioner=true in "addons-397143"
	I1206 09:01:12.478756  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.479209  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.479238  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.479693  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.477902  560524 addons.go:70] Setting yakd=true in profile "addons-397143"
	I1206 09:01:12.480168  560524 addons.go:239] Setting addon yakd=true in "addons-397143"
	I1206 09:01:12.480200  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.480319  560524 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-397143"
	I1206 09:01:12.480380  560524 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-397143"
	I1206 09:01:12.480414  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.481104  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.481327  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.484375  560524 addons.go:70] Setting ingress=true in profile "addons-397143"
	I1206 09:01:12.484404  560524 addons.go:239] Setting addon ingress=true in "addons-397143"
	I1206 09:01:12.480535  560524 out.go:179] * Verifying Kubernetes components...
	I1206 09:01:12.484450  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.477997  560524 addons.go:239] Setting addon registry-creds=true in "addons-397143"
	I1206 09:01:12.480552  560524 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-397143"
	I1206 09:01:12.480564  560524 addons.go:70] Setting cloud-spanner=true in profile "addons-397143"
	I1206 09:01:12.480578  560524 addons.go:70] Setting gcp-auth=true in profile "addons-397143"
	I1206 09:01:12.480587  560524 addons.go:70] Setting default-storageclass=true in profile "addons-397143"
	I1206 09:01:12.480599  560524 addons.go:70] Setting registry=true in profile "addons-397143"
	I1206 09:01:12.480609  560524 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-397143"
	I1206 09:01:12.484964  560524 addons.go:239] Setting addon cloud-spanner=true in "addons-397143"
	I1206 09:01:12.485001  560524 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-397143"
	I1206 09:01:12.485019  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.485096  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.485476  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.485853  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.486122  560524 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-397143"
	I1206 09:01:12.486166  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.486170  560524 addons.go:239] Setting addon registry=true in "addons-397143"
	I1206 09:01:12.486205  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.486614  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.486640  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.486864  560524 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-397143"
	I1206 09:01:12.487236  560524 mustload.go:66] Loading cluster: addons-397143
	I1206 09:01:12.484943  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.489054  560524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:01:12.485004  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.496717  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.497082  560524 config.go:182] Loaded profile config "addons-397143": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 09:01:12.497363  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.497816  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.533422  560524 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-397143"
	I1206 09:01:12.533841  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.535176  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.537207  560524 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 09:01:12.538688  560524 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 09:01:12.538709  560524 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 09:01:12.538768  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.542673  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 09:01:12.543796  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 09:01:12.544109  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 09:01:12.545082  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 09:01:12.546161  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 09:01:12.547288  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 09:01:12.547471  560524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 09:01:12.547498  560524 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 09:01:12.547569  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.549604  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 09:01:12.550739  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 09:01:12.552447  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 09:01:12.553669  560524 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 09:01:12.554907  560524 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:01:12.554963  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 09:01:12.555062  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.553692  560524 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 09:01:12.555164  560524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 09:01:12.555202  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.564727  560524 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 09:01:12.566161  560524 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 09:01:12.566885  560524 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 09:01:12.566907  560524 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 09:01:12.566996  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.567582  560524 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 09:01:12.567601  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 09:01:12.567656  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.579617  560524 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 09:01:12.580901  560524 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 09:01:12.581341  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.582123  560524 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 09:01:12.582141  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 09:01:12.582216  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.586492  560524 addons.go:239] Setting addon default-storageclass=true in "addons-397143"
	I1206 09:01:12.586536  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.587059  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.588005  560524 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1206 09:01:12.589252  560524 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1206 09:01:12.589864  560524 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1206 09:01:12.590790  560524 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:01:12.590810  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 09:01:12.590869  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.594460  560524 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1206 09:01:12.598014  560524 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:01:12.598549  560524 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 09:01:12.598951  560524 addons.go:436] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1206 09:01:12.598977  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1206 09:01:12.599120  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.600079  560524 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:01:12.600099  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 09:01:12.600173  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.601042  560524 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:01:12.602127  560524 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1206 09:01:12.603901  560524 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:01:12.605107  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 09:01:12.605234  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.617423  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.619045  560524 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:01:12.620657  560524 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:01:12.620678  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:01:12.620737  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.620977  560524 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1206 09:01:12.621967  560524 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:01:12.621985  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 09:01:12.622039  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.632592  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.635115  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.646054  560524 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 09:01:12.648845  560524 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:01:12.648869  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 09:01:12.648987  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.653887  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.658336  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.659661  560524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
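
The sed pipeline above rewrites CoreDNS's Corefile before replacing the ConfigMap: it inserts a hosts stanza ahead of the `forward . /etc/resolv.conf` line and a `log` directive after `errors`, so that host.minikube.internal resolves from inside pods. Reconstructed from the sed expressions, the stanza the Corefile gains is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

The fallthrough keeps every other name flowing on to the usual forwarders.
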
	I1206 09:01:12.660189  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.662375  560524 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 09:01:12.664310  560524 out.go:179]   - Using image docker.io/busybox:stable
	I1206 09:01:12.665404  560524 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:01:12.665420  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 09:01:12.665488  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.674399  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.675039  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.676290  560524 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:01:12.676309  560524 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:01:12.676358  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.682439  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.687743  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.692129  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.707714  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.711064  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	W1206 09:01:12.711163  560524 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 09:01:12.711203  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.711223  560524 retry.go:31] will retry after 158.051231ms: ssh: handshake failed: EOF
	I1206 09:01:12.718598  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.730956  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	W1206 09:01:12.735503  560524 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 09:01:12.735534  560524 retry.go:31] will retry after 278.0498ms: ssh: handshake failed: EOF
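
The two handshake-failed/EOF warnings above are benign: the SSH dials race the container's sshd startup, and retry.go answers with a short, growing, jittered delay instead of failing the whole addon setup. The generic shape of that helper (names and constants here are illustrative, not minikube's retry package):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn until it succeeds, sleeping a jittered, growing
    // delay between attempts, and gives up after the attempt budget.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        _ = retry(5, 150*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return fmt.Errorf("ssh: handshake failed: EOF") // sshd not up yet
            }
            fmt.Println("handshake succeeded on attempt", calls)
            return nil
        })
    }
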
	I1206 09:01:12.748705  560524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:01:12.808924  560524 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 09:01:12.808952  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 09:01:12.832044  560524 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 09:01:12.832077  560524 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 09:01:12.833212  560524 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 09:01:12.833235  560524 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 09:01:12.854358  560524 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 09:01:12.854380  560524 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 09:01:12.862635  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:01:12.874871  560524 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 09:01:12.874896  560524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 09:01:12.876000  560524 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:01:12.876022  560524 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 09:01:12.894541  560524 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 09:01:12.894575  560524 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 09:01:12.896838  560524 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 09:01:12.896859  560524 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 09:01:12.912158  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:01:12.912561  560524 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 09:01:12.912580  560524 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 09:01:12.920060  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:01:12.922464  560524 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 09:01:12.922489  560524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 09:01:12.931369  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:01:12.931574  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:01:12.931884  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:01:12.934740  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:01:12.935185  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 09:01:12.941343  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:01:12.945896  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1206 09:01:12.960492  560524 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:01:12.960514  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 09:01:12.960988  560524 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 09:01:12.961009  560524 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 09:01:12.975564  560524 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 09:01:12.975665  560524 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 09:01:12.977614  560524 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 09:01:12.977634  560524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 09:01:13.021373  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:01:13.041637  560524 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 09:01:13.041664  560524 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 09:01:13.043958  560524 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:01:13.044171  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 09:01:13.059562  560524 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 09:01:13.059704  560524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 09:01:13.101737  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:01:13.114966  560524 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 09:01:13.115006  560524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 09:01:13.121288  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:01:13.131456  560524 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:01:13.131478  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 09:01:13.201875  560524 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1206 09:01:13.203058  560524 node_ready.go:35] waiting up to 6m0s for node "addons-397143" to be "Ready" ...
	I1206 09:01:13.205984  560524 node_ready.go:49] node "addons-397143" is "Ready"
	I1206 09:01:13.206067  560524 node_ready.go:38] duration metric: took 2.852565ms for node "addons-397143" to be "Ready" ...
	I1206 09:01:13.206099  560524 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:01:13.206169  560524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:01:13.219718  560524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 09:01:13.219797  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 09:01:13.256531  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:01:13.320712  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:01:13.366142  560524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 09:01:13.366172  560524 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 09:01:13.421754  560524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 09:01:13.421779  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 09:01:13.607566  560524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 09:01:13.607599  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 09:01:13.677730  560524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:01:13.677757  560524 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 09:01:13.707204  560524 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-397143" context rescaled to 1 replicas
	I1206 09:01:13.737431  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:01:14.115178  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.202814857s)
	I1206 09:01:14.115249  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.195164084s)
	I1206 09:01:14.115304  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.183911038s)
	I1206 09:01:14.115428  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.252699742s)
	I1206 09:01:14.115667  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.184070234s)
	I1206 09:01:14.115730  560524 addons.go:495] Verifying addon metrics-server=true in "addons-397143"
	I1206 09:01:14.866616  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.931783086s)
	I1206 09:01:14.866666  560524 addons.go:495] Verifying addon ingress=true in "addons-397143"
	I1206 09:01:14.867029  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.931634954s)
	I1206 09:01:14.867140  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.935229961s)
	I1206 09:01:14.867266  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.925846261s)
	I1206 09:01:14.867962  560524 out.go:179] * Verifying ingress addon...
	I1206 09:01:14.870173  560524 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 09:01:14.874506  560524 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 09:01:14.874535  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:15.374191  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:15.877901  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:15.914260  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.96829715s)
	I1206 09:01:15.914340  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.892942257s)
	I1206 09:01:15.914359  560524 addons.go:495] Verifying addon registry=true in "addons-397143"
	I1206 09:01:15.914636  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.812866152s)
	I1206 09:01:15.914774  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.793451496s)
	W1206 09:01:15.914820  560524 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 09:01:15.914839  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.658210309s)
	I1206 09:01:15.914851  560524 retry.go:31] will retry after 235.577069ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
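
Both failures above are the same CRD-establishment race: the VolumeSnapshotClass in the apply batch is rejected because the CRDs created milliseconds earlier are not yet served by the API ("ensure CRDs are installed first"). minikube's answer is simply to retry the whole apply; an alternative sketch is to gate the dependent objects on each CRD's Established condition first:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        for _, crd := range []string{
            "volumesnapshotclasses.snapshot.storage.k8s.io",
            "volumesnapshotcontents.snapshot.storage.k8s.io",
            "volumesnapshots.snapshot.storage.k8s.io",
        } {
            // Block until the API server serves the new type.
            cmd := exec.Command("kubectl", "wait", "--for=condition=Established",
                "crd/"+crd, "--timeout=60s")
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                panic(err)
            }
        }
    }
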
	I1206 09:01:15.914788  560524 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.708588684s)
	I1206 09:01:15.914941  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.594176794s)
	I1206 09:01:15.914885  560524 api_server.go:72] duration metric: took 3.43714198s to wait for apiserver process to appear ...
	I1206 09:01:15.915156  560524 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:01:15.915182  560524 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1206 09:01:15.915235  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.1777014s)
	I1206 09:01:15.915255  560524 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-397143"
	I1206 09:01:15.917085  560524 out.go:179] * Verifying registry addon...
	I1206 09:01:15.917096  560524 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-397143 service yakd-dashboard -n yakd-dashboard
	
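Following the YAKD hint above, one way to wait for the dashboard pod before opening the service; the --all selector is a simplification, since this log does not show which labels the yakd-dashboard pod carries:

	kubectl --context addons-397143 -n yakd-dashboard wait \
	  --for=condition=Ready pod --all --timeout=120s
	minikube -p addons-397143 service yakd-dashboard -n yakd-dashboard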
	I1206 09:01:15.917094  560524 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 09:01:15.919365  560524 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 09:01:15.920749  560524 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 09:01:15.923542  560524 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1206 09:01:15.925084  560524 api_server.go:141] control plane version: v1.34.2
	I1206 09:01:15.925111  560524 api_server.go:131] duration metric: took 9.945321ms to wait for apiserver health ...
	I1206 09:01:15.925122  560524 system_pods.go:43] waiting for kube-system pods to appear ...
	W1206 09:01:15.928076  560524 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
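The 'default-storageclass' warning above is an optimistic-concurrency failure: the addon read csi-hostpath-sc, the object changed underneath it, and the write was rejected because the stored resourceVersion was newer than the one being submitted. A rough manual equivalent that avoids the stale-read problem, since a patch does not carry a resourceVersion (both storage class names are taken from the error message):

	kubectl patch storageclass csi-hostpath-sc -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass standard -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'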
	I1206 09:01:15.978527  560524 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 09:01:15.978678  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:15.983004  560524 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 09:01:15.983030  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:15.983620  560524 system_pods.go:59] 20 kube-system pods found
	I1206 09:01:15.983673  560524 system_pods.go:61] "amd-gpu-device-plugin-7v6qd" [e7bf466a-43c8-41cb-9860-7d52e5aff252] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:01:15.983688  560524 system_pods.go:61] "coredns-66bc5c9577-fpvgk" [41ee2021-f73e-404a-b1a3-00dfc267d583] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:01:15.983709  560524 system_pods.go:61] "coredns-66bc5c9577-x6rcj" [751d4d7f-a16d-4565-bff1-92503cdd9a58] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:01:15.983741  560524 system_pods.go:61] "csi-hostpath-attacher-0" [5d7370f7-00bc-4ee4-a1e5-aa9eeb7fb030] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:01:15.983757  560524 system_pods.go:61] "csi-hostpath-resizer-0" [a5cda1a5-ed95-4107-8e42-afd26d252c12] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:01:15.983772  560524 system_pods.go:61] "csi-hostpathplugin-w827s" [4cfe1385-8cf1-4411-bd96-1bb35c41e598] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:01:15.983784  560524 system_pods.go:61] "etcd-addons-397143" [61458027-2120-4ba5-ae6f-72c25bbb629a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:01:15.983801  560524 system_pods.go:61] "kube-apiserver-addons-397143" [04461dfb-ba60-400b-bc1b-eb5cecb4e116] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:01:15.983846  560524 system_pods.go:61] "kube-controller-manager-addons-397143" [a6b3b762-cc38-492a-8592-f0622969a194] Running
	I1206 09:01:15.983856  560524 system_pods.go:61] "kube-ingress-dns-minikube" [33c3e281-3395-41a9-a0d7-5abebf00c814] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:01:15.983863  560524 system_pods.go:61] "kube-proxy-6fcf7" [31f6821e-c37c-4b3d-8b96-240fb218fc3f] Running
	I1206 09:01:15.983874  560524 system_pods.go:61] "kube-scheduler-addons-397143" [e6e2d9e1-c8bf-4735-b568-8de4d6e7a1c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:01:15.983891  560524 system_pods.go:61] "metrics-server-85b7d694d7-kf4gp" [028fa0be-ce61-4a1a-88bc-1cb6d15e3e69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:01:15.983929  560524 system_pods.go:61] "nvidia-device-plugin-daemonset-znf8f" [17e7dbb3-481b-40e2-95e0-1b3aeb866481] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 09:01:15.983958  560524 system_pods.go:61] "registry-6b586f9694-zpjdp" [548aae88-07b1-44c4-be9a-0e70e03f5eb2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:01:15.983973  560524 system_pods.go:61] "registry-creds-764b6fb674-2jnnm" [801b6007-fb7e-483b-8257-f46427397644] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:01:15.983981  560524 system_pods.go:61] "registry-proxy-rxwrl" [29e4f265-043b-4c75-862c-a02beb7c6e1e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:01:15.984004  560524 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9bpfc" [f513e41b-59c9-49c0-b134-9dde97dcaa6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:01:15.984013  560524 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kq9t8" [4a77f10b-ff26-482a-9c59-65660620545d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:01:15.984019  560524 system_pods.go:61] "storage-provisioner" [16d5c01b-4db2-4009-b381-13397dfde1d0] Running
	I1206 09:01:15.984028  560524 system_pods.go:74] duration metric: took 58.899165ms to wait for pod list to return data ...
	I1206 09:01:15.984039  560524 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:01:15.987167  560524 default_sa.go:45] found service account: "default"
	I1206 09:01:15.987193  560524 default_sa.go:55] duration metric: took 3.148522ms for default service account to be created ...
	I1206 09:01:15.987205  560524 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:01:16.082329  560524 system_pods.go:86] 20 kube-system pods found
	I1206 09:01:16.082374  560524 system_pods.go:89] "amd-gpu-device-plugin-7v6qd" [e7bf466a-43c8-41cb-9860-7d52e5aff252] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:01:16.082385  560524 system_pods.go:89] "coredns-66bc5c9577-fpvgk" [41ee2021-f73e-404a-b1a3-00dfc267d583] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:01:16.082403  560524 system_pods.go:89] "coredns-66bc5c9577-x6rcj" [751d4d7f-a16d-4565-bff1-92503cdd9a58] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:01:16.082416  560524 system_pods.go:89] "csi-hostpath-attacher-0" [5d7370f7-00bc-4ee4-a1e5-aa9eeb7fb030] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:01:16.082424  560524 system_pods.go:89] "csi-hostpath-resizer-0" [a5cda1a5-ed95-4107-8e42-afd26d252c12] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:01:16.082434  560524 system_pods.go:89] "csi-hostpathplugin-w827s" [4cfe1385-8cf1-4411-bd96-1bb35c41e598] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:01:16.082451  560524 system_pods.go:89] "etcd-addons-397143" [61458027-2120-4ba5-ae6f-72c25bbb629a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:01:16.082461  560524 system_pods.go:89] "kube-apiserver-addons-397143" [04461dfb-ba60-400b-bc1b-eb5cecb4e116] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:01:16.082471  560524 system_pods.go:89] "kube-controller-manager-addons-397143" [a6b3b762-cc38-492a-8592-f0622969a194] Running
	I1206 09:01:16.082480  560524 system_pods.go:89] "kube-ingress-dns-minikube" [33c3e281-3395-41a9-a0d7-5abebf00c814] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:01:16.082491  560524 system_pods.go:89] "kube-proxy-6fcf7" [31f6821e-c37c-4b3d-8b96-240fb218fc3f] Running
	I1206 09:01:16.082500  560524 system_pods.go:89] "kube-scheduler-addons-397143" [e6e2d9e1-c8bf-4735-b568-8de4d6e7a1c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:01:16.082508  560524 system_pods.go:89] "metrics-server-85b7d694d7-kf4gp" [028fa0be-ce61-4a1a-88bc-1cb6d15e3e69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:01:16.082517  560524 system_pods.go:89] "nvidia-device-plugin-daemonset-znf8f" [17e7dbb3-481b-40e2-95e0-1b3aeb866481] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 09:01:16.082533  560524 system_pods.go:89] "registry-6b586f9694-zpjdp" [548aae88-07b1-44c4-be9a-0e70e03f5eb2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:01:16.082547  560524 system_pods.go:89] "registry-creds-764b6fb674-2jnnm" [801b6007-fb7e-483b-8257-f46427397644] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:01:16.082560  560524 system_pods.go:89] "registry-proxy-rxwrl" [29e4f265-043b-4c75-862c-a02beb7c6e1e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:01:16.082574  560524 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9bpfc" [f513e41b-59c9-49c0-b134-9dde97dcaa6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:01:16.082587  560524 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kq9t8" [4a77f10b-ff26-482a-9c59-65660620545d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:01:16.082593  560524 system_pods.go:89] "storage-provisioner" [16d5c01b-4db2-4009-b381-13397dfde1d0] Running
	I1206 09:01:16.082603  560524 system_pods.go:126] duration metric: took 95.390308ms to wait for k8s-apps to be running ...
	I1206 09:01:16.082613  560524 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 09:01:16.082672  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:01:16.101439  560524 system_svc.go:56] duration metric: took 18.813639ms WaitForService to wait for kubelet
	I1206 09:01:16.101478  560524 kubeadm.go:587] duration metric: took 3.623733904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:01:16.101503  560524 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:01:16.104673  560524 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:01:16.104710  560524 node_conditions.go:123] node cpu capacity is 8
	I1206 09:01:16.104732  560524 node_conditions.go:105] duration metric: took 3.221231ms to run NodePressure ...
	I1206 09:01:16.104748  560524 start.go:242] waiting for startup goroutines ...
	I1206 09:01:16.150736  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:01:16.374355  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:16.474630  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:16.474649  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:16.874595  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:16.924250  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:16.925463  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:17.374139  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:17.474704  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:17.474934  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:17.874625  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:17.936232  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:17.936489  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:18.373789  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:18.474746  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:18.474967  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:18.748694  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.597908981s)
	I1206 09:01:18.874592  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:18.975661  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:18.975839  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:19.375294  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:19.475391  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:19.475635  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:19.874372  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:19.923637  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:19.923636  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:19.988427  560524 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 09:01:19.988506  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:20.013006  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:20.121268  560524 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 09:01:20.138006  560524 addons.go:239] Setting addon gcp-auth=true in "addons-397143"
	I1206 09:01:20.138077  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:20.138517  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:20.160717  560524 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 09:01:20.160780  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:20.182867  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:20.283476  560524 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:01:20.284929  560524 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 09:01:20.285954  560524 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 09:01:20.285977  560524 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 09:01:20.301724  560524 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 09:01:20.301749  560524 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 09:01:20.317310  560524 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:01:20.317334  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 09:01:20.333547  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:01:20.374360  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:20.423398  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:20.423565  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:20.710055  560524 addons.go:495] Verifying addon gcp-auth=true in "addons-397143"
	I1206 09:01:20.711695  560524 out.go:179] * Verifying gcp-auth addon...
	I1206 09:01:20.714378  560524 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 09:01:20.716846  560524 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 09:01:20.716870  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:20.874932  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:20.922982  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:20.923216  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:21.418604  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:21.419016  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:21.422465  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:21.423162  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:21.718363  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:21.873843  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:21.974662  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:21.975038  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:22.217716  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:22.373626  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:22.422901  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:22.424209  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:22.717880  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:22.873899  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:22.923166  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:22.923520  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:23.217411  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:23.373732  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:23.423353  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:23.424386  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:23.717719  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:23.874090  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:23.923022  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:23.923506  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:24.217799  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:24.374269  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:24.423436  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:24.423768  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:24.718556  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:24.873245  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:24.924519  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:24.924590  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:25.217781  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:25.374437  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:25.423356  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:25.423667  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:25.717553  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:25.874423  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:25.975702  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:25.976154  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:26.218184  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:26.374083  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:26.423742  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:26.423942  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:26.718904  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:26.874412  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:26.935940  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:26.975472  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:27.217948  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:27.374749  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:27.423181  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:27.423239  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:27.718475  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:27.873518  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:27.923490  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:27.923750  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:28.217627  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:28.374122  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:28.474332  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:28.474409  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:28.718288  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:28.872993  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:28.922734  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:28.923229  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:29.217373  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:29.374166  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:29.423444  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:29.423469  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:29.718531  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:29.873852  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:29.923138  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:29.924356  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:30.218103  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:30.374940  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:30.423797  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:30.424005  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:30.717852  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:30.954652  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:30.954714  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:30.954830  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:31.218067  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:31.374239  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:31.423063  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:31.423647  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:31.717891  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:31.873874  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:31.923089  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:31.923262  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:32.217444  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:32.373976  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:32.423332  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:32.423553  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:32.754594  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:32.873785  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:32.922615  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:32.924405  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:33.218120  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:33.374475  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:33.423564  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:33.423840  560524 kapi.go:107] duration metric: took 17.503090524s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 09:01:33.718190  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:33.874004  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:33.923121  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:34.218238  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:34.374948  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:34.423160  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:34.718033  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:34.892803  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:34.922649  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:35.218385  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:35.373474  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:35.423582  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:35.717603  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:35.873680  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:35.922943  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:36.218306  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:36.373850  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:36.422772  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:36.718229  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:36.873306  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:36.923493  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:37.217894  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:37.374306  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:37.423630  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:37.718152  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:37.891722  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:37.922114  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:38.217785  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:38.373452  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:38.451417  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:38.717515  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:38.873851  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:38.922896  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:39.264105  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:39.374593  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:39.423074  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:39.734042  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:39.874139  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:39.923126  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:40.217751  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:40.373925  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:40.423235  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:40.718315  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:40.884158  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:40.923194  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:41.218704  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:41.376208  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:41.424075  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:41.718414  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:41.873424  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:41.923874  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:42.218110  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:42.374173  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:42.423034  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:42.718620  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:42.873966  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:42.922939  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:43.218392  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:43.373738  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:43.423233  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:43.717372  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:43.873286  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:43.923708  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:44.217500  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:44.374023  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:44.422962  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:44.718430  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:44.874076  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:44.923115  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:45.259763  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:45.374049  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:45.423206  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:45.718080  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:45.876131  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:45.975142  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:46.218326  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:46.373340  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:46.423016  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:46.718549  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:46.873625  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:46.922667  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:47.218097  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:47.373618  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:47.474875  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:47.717687  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:47.874217  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:47.923578  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:48.217765  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:48.432005  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:48.432332  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:48.719867  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:48.936736  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:48.936829  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:49.246385  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:49.373707  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:49.422957  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:49.718435  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:49.874292  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:49.975172  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:50.218244  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:50.373729  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:50.422704  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:50.718147  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:50.873930  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:50.922984  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:51.218045  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:51.374625  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:51.422702  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:51.718047  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:51.874354  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:51.923423  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:52.235203  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:52.373950  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:52.459151  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:52.718770  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:52.873740  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:52.923207  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:53.218709  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:53.374471  560524 kapi.go:107] duration metric: took 38.504297557s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 09:01:53.423630  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:53.717673  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:53.922844  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:54.217858  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:54.423200  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:54.718258  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:54.923580  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:55.217751  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:55.423617  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:55.717654  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:55.923389  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:56.218089  560524 kapi.go:107] duration metric: took 35.503708394s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 09:01:56.219717  560524 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-397143 cluster.
	I1206 09:01:56.221089  560524 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 09:01:56.222339  560524 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
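The opt-out mentioned above is label-based; a minimal sketch of a pod that skips the gcp-auth credential mount (the pod name is hypothetical and the "true" value is an assumption, since the message names only the gcp-auth-skip-secret key):

	kubectl --context addons-397143 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-demo            # hypothetical name for illustration
	  labels:
	    gcp-auth-skip-secret: "true"    # value assumed; the log names only the key
	spec:
	  containers:
	  - name: nginx
	    image: docker.io/nginx:alpine
	EOF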
	I1206 09:01:56.424290  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:56.923312  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:57.429386  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:57.923233  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:58.423286  560524 kapi.go:107] duration metric: took 42.503917285s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 09:01:58.424968  560524 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, metrics-server, cloud-spanner, storage-provisioner, inspektor-gadget, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1206 09:01:58.426064  560524 addons.go:530] duration metric: took 45.948293408s for enable addons: enabled=[registry-creds nvidia-device-plugin amd-gpu-device-plugin ingress-dns metrics-server cloud-spanner storage-provisioner inspektor-gadget volcano yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1206 09:01:58.426109  560524 start.go:247] waiting for cluster config update ...
	I1206 09:01:58.426129  560524 start.go:256] writing updated cluster config ...
	I1206 09:01:58.426393  560524 ssh_runner.go:195] Run: rm -f paused
	I1206 09:01:58.430366  560524 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:01:58.433776  560524 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fpvgk" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:58.437927  560524 pod_ready.go:94] pod "coredns-66bc5c9577-fpvgk" is "Ready"
	I1206 09:01:58.437946  560524 pod_ready.go:86] duration metric: took 4.149963ms for pod "coredns-66bc5c9577-fpvgk" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:58.439684  560524 pod_ready.go:83] waiting for pod "etcd-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:58.443125  560524 pod_ready.go:94] pod "etcd-addons-397143" is "Ready"
	I1206 09:01:58.443142  560524 pod_ready.go:86] duration metric: took 3.44013ms for pod "etcd-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:58.444851  560524 pod_ready.go:83] waiting for pod "kube-apiserver-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:58.448179  560524 pod_ready.go:94] pod "kube-apiserver-addons-397143" is "Ready"
	I1206 09:01:58.448197  560524 pod_ready.go:86] duration metric: took 3.321895ms for pod "kube-apiserver-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:58.450048  560524 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:58.834021  560524 pod_ready.go:94] pod "kube-controller-manager-addons-397143" is "Ready"
	I1206 09:01:58.834055  560524 pod_ready.go:86] duration metric: took 383.980324ms for pod "kube-controller-manager-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:59.033884  560524 pod_ready.go:83] waiting for pod "kube-proxy-6fcf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:59.434807  560524 pod_ready.go:94] pod "kube-proxy-6fcf7" is "Ready"
	I1206 09:01:59.434840  560524 pod_ready.go:86] duration metric: took 400.931468ms for pod "kube-proxy-6fcf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:59.634261  560524 pod_ready.go:83] waiting for pod "kube-scheduler-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:02:00.034546  560524 pod_ready.go:94] pod "kube-scheduler-addons-397143" is "Ready"
	I1206 09:02:00.034578  560524 pod_ready.go:86] duration metric: took 400.292578ms for pod "kube-scheduler-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:02:00.034593  560524 pod_ready.go:40] duration metric: took 1.604194688s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:02:00.080362  560524 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:02:00.083042  560524 out.go:179] * Done! kubectl is now configured to use "addons-397143" cluster and "default" namespace by default
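
The gcp-auth notes above describe a per-pod opt-out via a label with the `gcp-auth-skip-secret` key. As a minimal sketch of that opt-out (the pod name and image are placeholders, and "true" is an assumed label value; the addon's own output only specifies the key), the label can be attached at creation time so the admission webhook sees it:

    kubectl --context addons-397143 run no-gcp-creds \
      --image=registry.k8s.io/pause:3.9 \
      --labels=gcp-auth-skip-secret=true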
	
	
	==> Docker <==
	Dec 06 09:05:32 addons-397143 dockerd[1058]: time="2025-12-06T09:05:32.082224586Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=071e8ddda600 ep=k8s_POD_helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7_local-path-storage_b7c7ce96-4280-453f-9d10-2c364f13cce8_0 net=none nid=529a01a6e280
	Dec 06 09:05:32 addons-397143 cri-dockerd[1348]: time="2025-12-06T09:05:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/530f9e22a8e0b07eac58ae3ac1b2927b0066568afff9e1b8fc98ad222981c521/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 06 09:05:32 addons-397143 dockerd[1058]: time="2025-12-06T09:05:32.166796282Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:05:32 addons-397143 dockerd[1058]: time="2025-12-06T09:05:32.198949472Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:05:47 addons-397143 dockerd[1058]: time="2025-12-06T09:05:47.041562993Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:05:47 addons-397143 dockerd[1058]: time="2025-12-06T09:05:47.130242013Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:05:47 addons-397143 cri-dockerd[1348]: time="2025-12-06T09:05:47Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Dec 06 09:06:03 addons-397143 dockerd[1058]: time="2025-12-06T09:06:03.112886725Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:06:10 addons-397143 dockerd[1058]: time="2025-12-06T09:06:10.036441321Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:06:10 addons-397143 dockerd[1058]: time="2025-12-06T09:06:10.064268895Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:07:01 addons-397143 dockerd[1058]: time="2025-12-06T09:07:01.054825653Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:07:01 addons-397143 dockerd[1058]: time="2025-12-06T09:07:01.087514170Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:07:32 addons-397143 dockerd[1058]: time="2025-12-06T09:07:32.218592509Z" level=info msg="ignoring event" container=530f9e22a8e0b07eac58ae3ac1b2927b0066568afff9e1b8fc98ad222981c521 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 09:08:02 addons-397143 dockerd[1058]: time="2025-12-06T09:08:02.508627708Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=79c6ca7d884b ep=k8s_POD_helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7_local-path-storage_8db43445-3765-4849-9a81-d1ec4f306fa2_0 net=none nid=529a01a6e280
	Dec 06 09:08:02 addons-397143 cri-dockerd[1348]: time="2025-12-06T09:08:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/41e2be6e26253b7da9000ac1472286f9b5178bf6ce3789c483fc983f21467f6d/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 06 09:08:02 addons-397143 dockerd[1058]: time="2025-12-06T09:08:02.592869668Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:08:02 addons-397143 dockerd[1058]: time="2025-12-06T09:08:02.681221861Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:08:02 addons-397143 cri-dockerd[1348]: time="2025-12-06T09:08:02Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Dec 06 09:08:18 addons-397143 dockerd[1058]: time="2025-12-06T09:08:18.038196738Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:08:18 addons-397143 dockerd[1058]: time="2025-12-06T09:08:18.069249951Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:08:23 addons-397143 dockerd[1058]: time="2025-12-06T09:08:23.603411827Z" level=info msg="ignoring event" container=41e2be6e26253b7da9000ac1472286f9b5178bf6ce3789c483fc983f21467f6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 09:08:48 addons-397143 dockerd[1058]: time="2025-12-06T09:08:48.501087187Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=2aced5b71ddb4d56ff41579440b7120a1b6162cc9b25cc3ba3e79f59d7c4689e
	Dec 06 09:08:48 addons-397143 dockerd[1058]: time="2025-12-06T09:08:48.537712824Z" level=info msg="ignoring event" container=2aced5b71ddb4d56ff41579440b7120a1b6162cc9b25cc3ba3e79f59d7c4689e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 09:08:48 addons-397143 dockerd[1058]: time="2025-12-06T09:08:48.704455237Z" level=info msg="ignoring event" container=e9bbd8b25704181e75b6943342e2ba852482eff7dab3235ac9c0a2042874e587 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 09:08:51 addons-397143 dockerd[1058]: time="2025-12-06T09:08:51.138468462Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
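
The repeated `toomanyrequests` errors above are Docker Hub's unauthenticated pull rate limit, and they are what keeps the busybox helper image (and, per the test summary, the nginx test pod) in ImagePullBackOff. Two common workarounds, sketched here under the assumption that a Docker Hub account is available: authenticating raises the per-account limit, and pre-loading sidesteps the in-cluster pull entirely (though the local pull used for loading is itself subject to the same limit).

    docker login                                          # authenticate to raise the pull limit
    minikube -p addons-397143 image load busybox:stable   # pre-load the image into the node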
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	cf6f4d7de1c33       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          8 minutes ago       Running             busybox                   0                   5e3979b5feb75       busybox                                    default
	b59acd257d9ee       registry.k8s.io/ingress-nginx/controller@sha256:e4127065d0317bd11dc64c4dd38dcf7fb1c3d72e468110b4086e636dbaac943d             9 minutes ago       Running             controller                0                   80cc5f6e2300f       ingress-nginx-controller-6c8bf45fb-qd5t5   ingress-nginx
	117e4b540e15b       884bd0ac01c8f                                                                                                                9 minutes ago       Exited              patch                     1                   e1f813548887b       ingress-nginx-admission-patch-lnnt6        ingress-nginx
	eda6476e37832       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   9 minutes ago       Exited              create                    0                   07f39fab88723       ingress-nginx-admission-create-g7njc       ingress-nginx
	f7af93ac9f80e       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                         9 minutes ago       Running             minikube-ingress-dns      0                   e7e63b5ad38d4       kube-ingress-dns-minikube                  kube-system
	0f347014fdc71       6e38f40d628db                                                                                                                10 minutes ago      Running             storage-provisioner       0                   b2b59bab762c4       storage-provisioner                        kube-system
	066c6444849e6       52546a367cc9e                                                                                                                10 minutes ago      Running             coredns                   0                   be640513e14ca       coredns-66bc5c9577-fpvgk                   kube-system
	f4c3a7ec7956b       8aa150647e88a                                                                                                                10 minutes ago      Running             kube-proxy                0                   7970b8f8cb298       kube-proxy-6fcf7                           kube-system
	c050fa440776a       a5f569d49a979                                                                                                                10 minutes ago      Running             kube-apiserver            0                   d6f0c21d418ca       kube-apiserver-addons-397143               kube-system
	f1e917c087344       a3e246e9556e9                                                                                                                10 minutes ago      Running             etcd                      0                   f3ea735827e52       etcd-addons-397143                         kube-system
	c79c3ddffef14       01e8bacf0f500                                                                                                                10 minutes ago      Running             kube-controller-manager   0                   974acc705e827       kube-controller-manager-addons-397143      kube-system
	addeaa3548cdc       88320b5498ff2                                                                                                                10 minutes ago      Running             kube-scheduler            0                   ff08c61d5f918       kube-scheduler-addons-397143               kube-system
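
The table above is CRI container state as reported on the node; assuming the standard minikube node tooling, the same view can be reproduced with crictl over `minikube ssh`:

    minikube -p addons-397143 ssh -- sudo crictl ps -a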
	
	
	==> controller_ingress [b59acd257d9e] <==
	I1206 09:01:53.750358       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I1206 09:01:53.750856       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1206 09:01:53.756904       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1206 09:01:53.757006       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-6c8bf45fb-qd5t5"
	I1206 09:01:53.789789       7 controller.go:228] "Backend successfully reloaded"
	I1206 09:01:53.789876       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I1206 09:01:53.790003       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-6c8bf45fb-qd5t5", UID:"f88ce9b4-7922-4a77-8d6f-79b4c4429a31", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I1206 09:01:53.819283       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-qd5t5" node="addons-397143"
	I1206 09:01:53.827712       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-qd5t5" node="addons-397143"
	W1206 09:03:14.513965       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I1206 09:03:14.515196       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I1206 09:03:14.518595       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I1206 09:03:14.518816       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"051367f5-3640-4236-a394-0b65adee15af", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1815", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W1206 09:03:16.691150       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1206 09:03:16.692468       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1206 09:03:16.744073       7 controller.go:228] "Backend successfully reloaded"
	I1206 09:03:16.744329       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-6c8bf45fb-qd5t5", UID:"f88ce9b4-7922-4a77-8d6f-79b4c4429a31", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1206 09:03:20.024051       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W1206 09:03:28.019899       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W1206 09:03:38.414023       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W1206 09:03:41.750037       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W1206 09:03:45.081029       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1206 09:03:53.765840       7 status.go:311] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	I1206 09:03:53.769629       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"051367f5-3640-4236-a394-0b65adee15af", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2172", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W1206 09:03:53.769688       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
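
The recurring `Service "default/nginx" does not have any active Endpoint` warnings are consistent with the nginx pod never passing readiness (it is stuck in ImagePullBackOff), so the Service's endpoint list stays empty and the ingress has no backend to route to. A quick way to confirm that chain, as a sketch:

    kubectl --context addons-397143 get endpoints nginx -n default
    kubectl --context addons-397143 describe pod nginx -n default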
	
	
	==> coredns [066c6444849e] <==
	[INFO] 10.244.0.8:52658 - 42477 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000166099s
	[INFO] 10.244.0.8:47411 - 48208 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000084729s
	[INFO] 10.244.0.8:47411 - 47850 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000108381s
	[INFO] 10.244.0.8:32988 - 11187 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000070128s
	[INFO] 10.244.0.8:32988 - 11414 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000091993s
	[INFO] 10.244.0.8:45533 - 40886 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000072829s
	[INFO] 10.244.0.8:45533 - 40614 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000102097s
	[INFO] 10.244.0.8:52408 - 60382 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000132537s
	[INFO] 10.244.0.8:52408 - 60102 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000165557s
	[INFO] 10.244.0.27:48995 - 17822 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000343105s
	[INFO] 10.244.0.27:58851 - 10520 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000460137s
	[INFO] 10.244.0.27:48846 - 61693 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136292s
	[INFO] 10.244.0.27:48229 - 1407 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159639s
	[INFO] 10.244.0.27:60678 - 33804 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137009s
	[INFO] 10.244.0.27:44355 - 57346 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015559s
	[INFO] 10.244.0.27:34301 - 61571 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004710227s
	[INFO] 10.244.0.27:36034 - 2819 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007062006s
	[INFO] 10.244.0.27:42498 - 37577 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005544323s
	[INFO] 10.244.0.27:45026 - 21893 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006596011s
	[INFO] 10.244.0.27:49738 - 57538 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003748503s
	[INFO] 10.244.0.27:52981 - 58245 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005803702s
	[INFO] 10.244.0.27:42342 - 3936 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001832996s
	[INFO] 10.244.0.27:56619 - 208 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002127039s
	[INFO] 10.244.0.32:41418 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000507144s
	[INFO] 10.244.0.32:34882 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000191563s
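
The NXDOMAIN bursts above are the expected `ndots:5` search-path expansion: each lookup is tried against every suffix from the pod's resolv.conf (the same search list visible in the cri-dockerd lines in the Docker section) before the bare name resolves with NOERROR. To observe the same search list and resolution from inside a pod, assuming the running busybox pod in the default namespace:

    kubectl --context addons-397143 exec busybox -- cat /etc/resolv.conf
    kubectl --context addons-397143 exec busybox -- nslookup registry.kube-system.svc.cluster.local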
	
	
	==> describe nodes <==
	Name:               addons-397143
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-397143
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=addons-397143
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_01_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-397143
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:01:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-397143
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:11:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:09:48 +0000   Sat, 06 Dec 2025 09:01:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:09:48 +0000   Sat, 06 Dec 2025 09:01:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:09:48 +0000   Sat, 06 Dec 2025 09:01:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:09:48 +0000   Sat, 06 Dec 2025 09:01:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-397143
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                0d1db992-e563-4087-8925-e25804a95f3c
	  Boot ID:                    41ef56f7-de94-4c23-8e93-ec48e4e68466
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.1.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-qd5t5    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-fpvgk                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-addons-397143                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-397143                250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-397143       200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-6fcf7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-397143                100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             260Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-397143 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-397143 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-397143 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m   node-controller  Node addons-397143 event: Registered Node addons-397143 in Controller
	  Normal  NodeReady                10m   kubelet          Node addons-397143 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 ee ab 83 85 8a 08 06
	[  +0.768089] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 35 57 c8 5d fa 08 06
	[  +3.986685] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a 13 a9 14 7f 58 08 06
	[  +0.848154] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 8a 6f 1a 22 ad 40 08 06
	[  +0.251239] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0e a7 c4 94 ef 5a 08 06
	[  +0.431184] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 41 2d 73 86 ed 08 06
	[  +0.515220] IPv4: martian source 10.244.0.8 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[  +0.026943] IPv4: martian source 10.244.0.8 from 10.244.0.6, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[  +1.299675] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 16 c7 72 c4 93 08 06
	[  +0.000525] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[Dec 6 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 3f cc d3 d9 16 08 06
	[  +0.000633] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[  +0.000768] IPv4: martian source 10.244.0.32 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
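
The "martian source" lines are the kernel logging packets whose source address is unexpected for the interface (pod-CIDR 10.244.0.x addresses seen on eth0), which is common and usually benign under minikube's docker-bridge networking. Whether that logging is enabled can be checked on the node, as a sketch:

    minikube -p addons-397143 ssh -- sysctl net.ipv4.conf.all.log_martians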
	
	
	==> etcd [f1e917c08734] <==
	{"level":"warn","ts":"2025-12-06T09:01:04.191774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:04.197980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:04.242893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:16.411510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:16.420778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52146","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:01:18.124593Z","caller":"traceutil/trace.go:172","msg":"trace[1481053207] transaction","detail":"{read_only:false; response_revision:922; number_of_response:1; }","duration":"138.501901ms","start":"2025-12-06T09:01:17.986070Z","end":"2025-12-06T09:01:18.124572Z","steps":["trace[1481053207] 'process raft request'  (duration: 138.358285ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:01:21.417420Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.244323ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041795505965803 > lease_revoke:<id:70cc9af2e4a944f8>","response":"size:29"}
	{"level":"info","ts":"2025-12-06T09:01:21.574870Z","caller":"traceutil/trace.go:172","msg":"trace[692152230] transaction","detail":"{read_only:false; response_revision:990; number_of_response:1; }","duration":"124.890399ms","start":"2025-12-06T09:01:21.449960Z","end":"2025-12-06T09:01:21.574851Z","steps":["trace[692152230] 'process raft request'  (duration: 124.720317ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:01:38.157844Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.27016ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:01:38.157937Z","caller":"traceutil/trace.go:172","msg":"trace[380136476] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1074; }","duration":"161.350974ms","start":"2025-12-06T09:01:37.996547Z","end":"2025-12-06T09:01:38.157898Z","steps":["trace[380136476] 'range keys from in-memory index tree'  (duration: 161.20193ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:01:41.694404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.728629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.760808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.787442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.796496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.823391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.835451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.848253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.862528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.871119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.883399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.897318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37114","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:11:03.761353Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2379}
	{"level":"info","ts":"2025-12-06T09:11:03.936327Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2379,"took":"174.261495ms","hash":1298401844,"current-db-size-bytes":10289152,"current-db-size":"10 MB","current-db-size-in-use-bytes":2678784,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2025-12-06T09:11:03.936377Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1298401844,"revision":2379,"compact-revision":-1}
	
	
	==> kernel <==
	 09:11:16 up  1:53,  0 user,  load average: 1.25, 0.90, 1.62
	Linux addons-397143 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [c050fa440776] <==
	W1206 09:02:29.385766       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1206 09:02:29.407678       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W1206 09:02:29.725514       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1206 09:02:29.830558       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1206 09:02:49.742611       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51456: use of closed network connection
	E1206 09:02:49.952113       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51484: use of closed network connection
	I1206 09:02:59.512313       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.183.137"}
	I1206 09:03:14.516093       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1206 09:03:14.702841       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.22.8"}
	I1206 09:03:15.657550       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1206 09:03:37.626555       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:03:37.626610       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:03:37.641121       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:03:37.641162       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:03:37.645766       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:03:37.645820       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:03:37.659460       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:03:37.659509       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:03:37.676854       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:03:37.676948       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1206 09:03:38.642263       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1206 09:03:38.677627       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1206 09:03:38.696416       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1206 09:03:41.208446       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1206 09:11:04.628282       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
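
The "Terminating all watchers" lines coincide with addon CRDs being removed while the tests toggle addons off (volcano at 09:02, snapshot.storage.k8s.io at 09:03); watchers on a deleted resource type are dropped by design. Which of those CRDs remain can be checked with:

    kubectl --context addons-397143 get crd | grep -e volcano -e snapshot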
	
	
	==> kube-controller-manager [c79c3ddffef1] <==
	E1206 09:10:37.610627       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:10:37.611808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:10:38.137460       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:10:38.138535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:10:41.658482       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1206 09:10:42.226554       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:10:42.227640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:10:47.037657       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:10:47.038770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:10:51.684801       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:10:51.685902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:10:52.522849       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:10:52.524026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:10:56.659414       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1206 09:10:58.406311       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:10:58.407376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:11:02.238610       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:11:02.239674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:11:04.732632       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:11:04.733644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:11:04.819291       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:11:04.820387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:11:11.659766       1 pv_controller.go:1587] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1206 09:11:15.564995       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:11:15.566161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
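
The persistentvolume-binder errors mean the `test-pvc` claim references a `local-path` StorageClass that the API server cannot find at that moment; this fits the Docker section above, where the storage-provisioner-rancher helper pods were themselves stuck on the rate-limited busybox pull. To confirm, as a sketch:

    kubectl --context addons-397143 get storageclass
    kubectl --context addons-397143 describe pvc test-pvc -n default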
	
	
	==> kube-proxy [f4c3a7ec7956] <==
	I1206 09:01:13.298363       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:01:13.419167       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:01:13.524968       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:01:13.525037       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:01:13.525165       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:01:13.619652       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:01:13.619726       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:01:13.642311       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:01:13.653409       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:01:13.653945       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:01:13.668205       1 config.go:200] "Starting service config controller"
	I1206 09:01:13.672419       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:01:13.668460       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:01:13.673249       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:01:13.668576       1 config.go:309] "Starting node config controller"
	I1206 09:01:13.673273       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:01:13.673280       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:01:13.673577       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:01:13.673591       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:01:13.774062       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:01:13.774132       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:01:13.774188       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [addeaa3548cd] <==
	E1206 09:01:04.640884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:01:04.640891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:01:04.640940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:01:04.640992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:01:04.640997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:01:04.641008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:01:04.641067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:01:05.448301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:01:05.471655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:01:05.476748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:01:05.519278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 09:01:05.519427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:01:05.520063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:01:05.544570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:01:05.563744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:01:05.582100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:01:05.604260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:01:05.622415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:01:05.643993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:01:05.691142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:01:05.740532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:01:05.750548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:01:05.752414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:01:05.767387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1206 09:01:08.537965       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:08:48 addons-397143 kubelet[2219]: I1206 09:08:48.842192    2219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b6c7a1f-2c77-4d73-b278-72377a567d97-config-volume" (OuterVolumeSpecName: "config-volume") pod "4b6c7a1f-2c77-4d73-b278-72377a567d97" (UID: "4b6c7a1f-2c77-4d73-b278-72377a567d97"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 06 09:08:48 addons-397143 kubelet[2219]: I1206 09:08:48.843991    2219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b6c7a1f-2c77-4d73-b278-72377a567d97-kube-api-access-cq8wt" (OuterVolumeSpecName: "kube-api-access-cq8wt") pod "4b6c7a1f-2c77-4d73-b278-72377a567d97" (UID: "4b6c7a1f-2c77-4d73-b278-72377a567d97"). InnerVolumeSpecName "kube-api-access-cq8wt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 06 09:08:48 addons-397143 kubelet[2219]: I1206 09:08:48.942415    2219 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b6c7a1f-2c77-4d73-b278-72377a567d97-config-volume\") on node \"addons-397143\" DevicePath \"\""
	Dec 06 09:08:48 addons-397143 kubelet[2219]: I1206 09:08:48.942452    2219 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cq8wt\" (UniqueName: \"kubernetes.io/projected/4b6c7a1f-2c77-4d73-b278-72377a567d97-kube-api-access-cq8wt\") on node \"addons-397143\" DevicePath \"\""
	Dec 06 09:08:49 addons-397143 kubelet[2219]: I1206 09:08:49.294365    2219 scope.go:117] "RemoveContainer" containerID="2aced5b71ddb4d56ff41579440b7120a1b6162cc9b25cc3ba3e79f59d7c4689e"
	Dec 06 09:08:49 addons-397143 kubelet[2219]: I1206 09:08:49.308305    2219 scope.go:117] "RemoveContainer" containerID="2aced5b71ddb4d56ff41579440b7120a1b6162cc9b25cc3ba3e79f59d7c4689e"
	Dec 06 09:08:49 addons-397143 kubelet[2219]: E1206 09:08:49.309036    2219 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 2aced5b71ddb4d56ff41579440b7120a1b6162cc9b25cc3ba3e79f59d7c4689e" containerID="2aced5b71ddb4d56ff41579440b7120a1b6162cc9b25cc3ba3e79f59d7c4689e"
	Dec 06 09:08:49 addons-397143 kubelet[2219]: I1206 09:08:49.309079    2219 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"2aced5b71ddb4d56ff41579440b7120a1b6162cc9b25cc3ba3e79f59d7c4689e"} err="failed to get container status \"2aced5b71ddb4d56ff41579440b7120a1b6162cc9b25cc3ba3e79f59d7c4689e\": rpc error: code = Unknown desc = Error response from daemon: No such container: 2aced5b71ddb4d56ff41579440b7120a1b6162cc9b25cc3ba3e79f59d7c4689e"
	Dec 06 09:08:51 addons-397143 kubelet[2219]: I1206 09:08:51.025015    2219 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b6c7a1f-2c77-4d73-b278-72377a567d97" path="/var/lib/kubelet/pods/4b6c7a1f-2c77-4d73-b278-72377a567d97/volumes"
	Dec 06 09:08:51 addons-397143 kubelet[2219]: I1206 09:08:51.025841    2219 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:08:51 addons-397143 kubelet[2219]: E1206 09:08:51.140666    2219 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 06 09:08:51 addons-397143 kubelet[2219]: E1206 09:08:51.140713    2219 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 06 09:08:51 addons-397143 kubelet[2219]: E1206 09:08:51.140805    2219 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(cdfe45e6-04be-4041-b2cb-1d4867877943): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:08:51 addons-397143 kubelet[2219]: E1206 09:08:51.140832    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	Dec 06 09:09:05 addons-397143 kubelet[2219]: E1206 09:09:05.020833    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	Dec 06 09:09:18 addons-397143 kubelet[2219]: E1206 09:09:18.020260    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	Dec 06 09:09:29 addons-397143 kubelet[2219]: E1206 09:09:29.020008    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	Dec 06 09:09:44 addons-397143 kubelet[2219]: E1206 09:09:44.020288    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	Dec 06 09:09:57 addons-397143 kubelet[2219]: E1206 09:09:57.020057    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	Dec 06 09:09:58 addons-397143 kubelet[2219]: I1206 09:09:58.018407    2219 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:10:11 addons-397143 kubelet[2219]: E1206 09:10:11.020566    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	Dec 06 09:10:23 addons-397143 kubelet[2219]: E1206 09:10:23.020032    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	Dec 06 09:10:38 addons-397143 kubelet[2219]: E1206 09:10:38.019396    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	Dec 06 09:10:53 addons-397143 kubelet[2219]: E1206 09:10:53.020674    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	Dec 06 09:11:06 addons-397143 kubelet[2219]: E1206 09:11:06.020083    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	
	
	==> storage-provisioner [0f347014fdc7] <==
	W1206 09:10:51.306446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:53.309572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:53.314204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:55.317945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:55.322101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:57.327472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:57.333499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:59.337133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:10:59.342605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:01.346270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:01.350782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:03.354435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:03.360126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:05.363476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:05.367537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:07.370794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:07.376260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:09.379902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:09.384341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:11.388085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:11.393619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:13.396725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:13.400793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:15.404056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:11:15.409405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
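
[editor's note] Two distinct failure signatures appear in the log dump above. The kube-scheduler "Failed to watch ... is forbidden" errors at 09:01:05 are a transient startup race: the scheduler's informers begin listing resources before its RBAC bindings are reconciled, and the "Caches are synced" line at 09:01:08 shows it recovered. The kubelet section carries the actual failure: every pull of docker.io/nginx:alpine is rejected with Docker Hub's toomanyrequests rate limit, which is what keeps the nginx pod in ImagePullBackOff. The storage-provisioner warnings about v1 Endpoints being deprecated in v1.33+ most likely come from Endpoints-based leader election and are harmless here. A minimal check that the scheduler's RBAC did settle after startup, assuming the caller has permission to impersonate system:kube-scheduler:

	kubectl --context addons-397143 auth can-i list storageclasses --as=system:kube-scheduler

Once the bindings are in place this prints "yes".
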
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-397143 -n addons-397143
helpers_test.go:269: (dbg) Run:  kubectl --context addons-397143 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx test-local-path ingress-nginx-admission-create-g7njc ingress-nginx-admission-patch-lnnt6
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-397143 describe pod nginx test-local-path ingress-nginx-admission-create-g7njc ingress-nginx-admission-patch-lnnt6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-397143 describe pod nginx test-local-path ingress-nginx-admission-create-g7njc ingress-nginx-admission-patch-lnnt6: exit status 1 (76.47075ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-397143/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:03:14 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.33
	IPs:
	  IP:  10.244.0.33
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vnp5t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vnp5t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m3s                   default-scheduler  Successfully assigned default/nginx to addons-397143
	  Normal   Pulling    5m14s (x5 over 8m2s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5m14s (x5 over 8m2s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m14s (x5 over 8m2s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    2m51s (x21 over 8m2s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m51s (x21 over 8m2s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6479g (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-6479g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-g7njc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lnnt6" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-397143 describe pod nginx test-local-path ingress-nginx-admission-create-g7njc ingress-nginx-admission-patch-lnnt6: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-397143 addons disable ingress-dns --alsologtostderr -v=1: (1.281575203s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-397143 addons disable ingress --alsologtostderr -v=1: (7.648387401s)
--- FAIL: TestAddons/parallel/Ingress (491.75s)
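
[editor's note] The FAIL is environmental rather than a regression in the ingress addon: this runner's unauthenticated Docker Hub pull quota was exhausted, so the test's nginx pod could never start. (The NotFound errors for the admission-create/admission-patch pods are expected; those are short-lived jobs whose pods are cleaned up after the webhook certificates are generated.) A hedged sketch of one mitigation, assuming the image is available in the host's Docker cache; the image name comes from the test, everything else is illustrative:

	# pull once on the host (ideally after an authenticated `docker login`),
	# then side-load the image so the kubelet never contacts Docker Hub:
	docker pull nginx:alpine
	minikube -p addons-397143 image load nginx:alpine

Alternatively, a docker-registry pull secret (kubectl create secret docker-registry ...) attached to the default service account would let the pulls count against an authenticated, higher quota.
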

                                                
                                    
TestAddons/parallel/LocalPath (344.77s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-397143 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-397143 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc test-pvc -o jsonpath={.status.phase} -n default [identical poll line repeated 300 more times over the 5m0s wait; output elided]
helpers_test.go:402: (dbg) Non-zero exit: kubectl --context addons-397143 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (3.671µs)
helpers_test.go:404: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:960: failed waiting for PVC test-pvc: context deadline exceeded
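Editor's note: the poll above is the generic PVC wait at helpers_test.go:402 — the same kubectl query is re-run until the phase reads Bound or the test context expires. A minimal shell sketch of that loop (profile, PVC name, and namespace are taken from the log; the 2-second interval is an assumption, not the harness's actual backoff):

    while [ "$(kubectl --context addons-397143 get pvc test-pvc -n default \
          -o jsonpath='{.status.phase}')" != "Bound" ]; do
      sleep 2   # assumed interval; the test harness uses its own retry cadence
    done

Here the loop never exits: the final poll returns after the deadline, so the test fails with "context deadline exceeded".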
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-397143
helpers_test.go:243: (dbg) docker inspect addons-397143:

-- stdout --
	[
	    {
	        "Id": "b250bd2a2eea34a9e7f899297362f95990eca5a4ab2636bd01f039ff80065744",
	        "Created": "2025-12-06T09:00:49.046421689Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 561175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:00:49.075737708Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/b250bd2a2eea34a9e7f899297362f95990eca5a4ab2636bd01f039ff80065744/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b250bd2a2eea34a9e7f899297362f95990eca5a4ab2636bd01f039ff80065744/hostname",
	        "HostsPath": "/var/lib/docker/containers/b250bd2a2eea34a9e7f899297362f95990eca5a4ab2636bd01f039ff80065744/hosts",
	        "LogPath": "/var/lib/docker/containers/b250bd2a2eea34a9e7f899297362f95990eca5a4ab2636bd01f039ff80065744/b250bd2a2eea34a9e7f899297362f95990eca5a4ab2636bd01f039ff80065744-json.log",
	        "Name": "/addons-397143",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-397143:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-397143",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b250bd2a2eea34a9e7f899297362f95990eca5a4ab2636bd01f039ff80065744",
	                "LowerDir": "/var/lib/docker/overlay2/b545b30d37ed21e7a941e78b8b36b564cca5cd414ccd085e87696ffd0b927f0f-init/diff:/var/lib/docker/overlay2/e436edcb7322c840f879b3c5d1d6403a3125a1711763277d84155a12f01e0462/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b545b30d37ed21e7a941e78b8b36b564cca5cd414ccd085e87696ffd0b927f0f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b545b30d37ed21e7a941e78b8b36b564cca5cd414ccd085e87696ffd0b927f0f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b545b30d37ed21e7a941e78b8b36b564cca5cd414ccd085e87696ffd0b927f0f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-397143",
	                "Source": "/var/lib/docker/volumes/addons-397143/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-397143",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-397143",
	                "name.minikube.sigs.k8s.io": "addons-397143",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7e0557e0dc0aacd78c66943f8b7a28f43c126b8e8ba76dbd40e3e5ed98c50aee",
	            "SandboxKey": "/var/run/docker/netns/7e0557e0dc0a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33171"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33172"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-397143": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "386d3d8bab56f83cae3e94e29d5cbd553f358549165499a0dbdbff3aa9e4e9df",
	                    "EndpointID": "7f265e22bb57aaffdf4d1df099c9fb2eaad4abd4cafb73f039ca10ebc1b7430d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "36:69:eb:9c:9f:3d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-397143",
	                        "b250bd2a2eea"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
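Editor's note: for a targeted check, the same Go-template syntax minikube itself uses (see the `docker container inspect -f` calls later in this log) pulls a single field out of the inspect output — for example, the host port mapped to the API server:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-397143
    # prints 33174, matching the NetworkSettings block above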
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-397143 -n addons-397143
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 logs -n 25
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-955357                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-955357   │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:00 UTC │
	│ delete  │ -p download-only-716523                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ download-only-716523   │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:00 UTC │
	│ start   │ --download-only -p download-docker-129039 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-129039 │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │                     │
	│ delete  │ -p download-docker-129039                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-129039 │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:00 UTC │
	│ start   │ --download-only -p binary-mirror-001335 --alsologtostderr --binary-mirror http://127.0.0.1:37221 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-001335   │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │                     │
	│ delete  │ -p binary-mirror-001335                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-001335   │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:00 UTC │
	│ addons  │ enable dashboard -p addons-397143                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │                     │
	│ addons  │ disable dashboard -p addons-397143                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │                     │
	│ start   │ -p addons-397143 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:02 UTC │
	│ addons  │ addons-397143 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:02 UTC │
	│ addons  │ addons-397143 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:02 UTC │
	│ addons  │ enable headlamp -p addons-397143 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:02 UTC │ 06 Dec 25 09:02 UTC │
	│ addons  │ addons-397143 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ ip      │ addons-397143 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-397143                                                                                                                                                                                                                                                                                                                                                                                             │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	│ addons  │ addons-397143 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-397143          │ jenkins │ v1.37.0 │ 06 Dec 25 09:03 UTC │ 06 Dec 25 09:03 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:00:26
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:00:26.539491  560524 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:00:26.539777  560524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:00:26.539788  560524 out.go:374] Setting ErrFile to fd 2...
	I1206 09:00:26.539792  560524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:00:26.540013  560524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:00:26.540583  560524 out.go:368] Setting JSON to false
	I1206 09:00:26.542040  560524 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6174,"bootTime":1765005453,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:00:26.542253  560524 start.go:143] virtualization: kvm guest
	I1206 09:00:26.544089  560524 out.go:179] * [addons-397143] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:00:26.545247  560524 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:00:26.545274  560524 notify.go:221] Checking for updates...
	I1206 09:00:26.547219  560524 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:00:26.548230  560524 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:00:26.549341  560524 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:00:26.550381  560524 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:00:26.551371  560524 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:00:26.552563  560524 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:00:26.576082  560524 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:00:26.576217  560524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:00:26.630549  560524 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-06 09:00:26.620509466 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:00:26.630694  560524 docker.go:319] overlay module found
	I1206 09:00:26.632414  560524 out.go:179] * Using the docker driver based on user configuration
	I1206 09:00:26.633456  560524 start.go:309] selected driver: docker
	I1206 09:00:26.633470  560524 start.go:927] validating driver "docker" against <nil>
	I1206 09:00:26.633481  560524 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:00:26.634062  560524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:00:26.688536  560524 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-06 09:00:26.679077232 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:00:26.688712  560524 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:00:26.688996  560524 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:00:26.690630  560524 out.go:179] * Using Docker driver with root privileges
	I1206 09:00:26.691691  560524 cni.go:84] Creating CNI manager for ""
	I1206 09:00:26.691772  560524 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1206 09:00:26.691786  560524 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1206 09:00:26.691871  560524 start.go:353] cluster config:
	{Name:addons-397143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-397143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:00:26.693227  560524 out.go:179] * Starting "addons-397143" primary control-plane node in "addons-397143" cluster
	I1206 09:00:26.694307  560524 cache.go:134] Beginning downloading kic base image for docker with docker
	I1206 09:00:26.695252  560524 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 09:00:26.696193  560524 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1206 09:00:26.696238  560524 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-555179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1206 09:00:26.696252  560524 cache.go:65] Caching tarball of preloaded images
	I1206 09:00:26.696281  560524 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 09:00:26.696350  560524 preload.go:238] Found /home/jenkins/minikube-integration/22047-555179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1206 09:00:26.696365  560524 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1206 09:00:26.696790  560524 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/config.json ...
	I1206 09:00:26.696818  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/config.json: {Name:mk7ac323ce41973b51bf22f2ed203b69de8fdcb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:26.712669  560524 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 09:00:26.712803  560524 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1206 09:00:26.712830  560524 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1206 09:00:26.712838  560524 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1206 09:00:26.712846  560524 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1206 09:00:26.712852  560524 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from local cache
	I1206 09:00:38.813411  560524 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from cached tarball
	I1206 09:00:38.813453  560524 cache.go:243] Successfully downloaded all kic artifacts
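Editor's note: at this point the kic base image was loaded from the cached tarball rather than pulled. A quick way to confirm it is present in the local daemon (image reference copied from the log; `docker image inspect` is standard CLI, the template here is illustrative):

    docker image inspect gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032 --format '{{.Id}}'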
	I1206 09:00:38.813498  560524 start.go:360] acquireMachinesLock for addons-397143: {Name:mkc730cbc98457a5fee329fd5ee4344cb9be9fdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 09:00:38.813592  560524 start.go:364] duration metric: took 71.446µs to acquireMachinesLock for "addons-397143"
	I1206 09:00:38.813627  560524 start.go:93] Provisioning new machine with config: &{Name:addons-397143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-397143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1206 09:00:38.813700  560524 start.go:125] createHost starting for "" (driver="docker")
	I1206 09:00:38.816005  560524 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1206 09:00:38.816271  560524 start.go:159] libmachine.API.Create for "addons-397143" (driver="docker")
	I1206 09:00:38.816306  560524 client.go:173] LocalClient.Create starting
	I1206 09:00:38.816424  560524 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca.pem
	I1206 09:00:38.882402  560524 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/cert.pem
	I1206 09:00:38.934646  560524 cli_runner.go:164] Run: docker network inspect addons-397143 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 09:00:38.952678  560524 cli_runner.go:211] docker network inspect addons-397143 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 09:00:38.952763  560524 network_create.go:284] running [docker network inspect addons-397143] to gather additional debugging logs...
	I1206 09:00:38.952787  560524 cli_runner.go:164] Run: docker network inspect addons-397143
	W1206 09:00:38.968739  560524 cli_runner.go:211] docker network inspect addons-397143 returned with exit code 1
	I1206 09:00:38.968771  560524 network_create.go:287] error running [docker network inspect addons-397143]: docker network inspect addons-397143: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-397143 not found
	I1206 09:00:38.968801  560524 network_create.go:289] output of [docker network inspect addons-397143]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-397143 not found
	
	** /stderr **
	I1206 09:00:38.968961  560524 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:00:38.985258  560524 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c28870}
	I1206 09:00:38.985313  560524 network_create.go:124] attempt to create docker network addons-397143 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 09:00:38.985369  560524 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-397143 addons-397143
	I1206 09:00:39.030379  560524 network_create.go:108] docker network addons-397143 192.168.49.0/24 created
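Editor's note: the subnet chosen at network.go:206 can be verified after creation with a network-inspect template (standard docker CLI; expected values taken from the log lines above):

    docker network inspect addons-397143 --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
    # expected: 192.168.49.0/24 gw 192.168.49.1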
	I1206 09:00:39.030414  560524 kic.go:121] calculated static IP "192.168.49.2" for the "addons-397143" container
	I1206 09:00:39.030485  560524 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 09:00:39.046412  560524 cli_runner.go:164] Run: docker volume create addons-397143 --label name.minikube.sigs.k8s.io=addons-397143 --label created_by.minikube.sigs.k8s.io=true
	I1206 09:00:39.064516  560524 oci.go:103] Successfully created a docker volume addons-397143
	I1206 09:00:39.064597  560524 cli_runner.go:164] Run: docker run --rm --name addons-397143-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-397143 --entrypoint /usr/bin/test -v addons-397143:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 09:00:45.752617  560524 cli_runner.go:217] Completed: docker run --rm --name addons-397143-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-397143 --entrypoint /usr/bin/test -v addons-397143:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (6.687967968s)
	I1206 09:00:45.752646  560524 oci.go:107] Successfully prepared a docker volume addons-397143
	I1206 09:00:45.752701  560524 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1206 09:00:45.752713  560524 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 09:00:45.752763  560524 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-555179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-397143:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 09:00:48.976506  560524 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-555179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-397143:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.223690811s)
	I1206 09:00:48.976540  560524 kic.go:203] duration metric: took 3.223821962s to extract preloaded images to volume ...
	W1206 09:00:48.976631  560524 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1206 09:00:48.976670  560524 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1206 09:00:48.976721  560524 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 09:00:49.029963  560524 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-397143 --name addons-397143 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-397143 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-397143 --network addons-397143 --ip 192.168.49.2 --volume addons-397143:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 09:00:49.281100  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Running}}
	I1206 09:00:49.298871  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:00:49.317772  560524 cli_runner.go:164] Run: docker exec addons-397143 stat /var/lib/dpkg/alternatives/iptables
	I1206 09:00:49.367708  560524 oci.go:144] the created container "addons-397143" has a running status.
	I1206 09:00:49.367738  560524 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa...
	I1206 09:00:49.419747  560524 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 09:00:49.448700  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:00:49.468327  560524 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 09:00:49.468354  560524 kic_runner.go:114] Args: [docker exec --privileged addons-397143 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 09:00:49.508783  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:00:49.529663  560524 machine.go:94] provisionDockerMachine start ...
	I1206 09:00:49.529777  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:49.546707  560524 main.go:143] libmachine: Using SSH client type: native
	I1206 09:00:49.547082  560524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33171 <nil> <nil>}
	I1206 09:00:49.547102  560524 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 09:00:49.547768  560524 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37356->127.0.0.1:33171: read: connection reset by peer
	I1206 09:00:52.674906  560524 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-397143
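Editor's note: the first dial at 09:00:49 fails with "connection reset by peer" because sshd inside the container is still starting; libmachine retries until the `hostname` command succeeds about three seconds later. A manual equivalent, using the port, key path, and user that appear in the sshutil line further down this log (the 1-second interval is an assumption):

    until ssh -p 33171 -o StrictHostKeyChecking=no \
          -i /home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa \
          docker@127.0.0.1 hostname 2>/dev/null; do
      sleep 1   # assumed retry interval
    done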
	
	I1206 09:00:52.674955  560524 ubuntu.go:182] provisioning hostname "addons-397143"
	I1206 09:00:52.675030  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:52.692096  560524 main.go:143] libmachine: Using SSH client type: native
	I1206 09:00:52.692362  560524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33171 <nil> <nil>}
	I1206 09:00:52.692379  560524 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-397143 && echo "addons-397143" | sudo tee /etc/hostname
	I1206 09:00:52.826459  560524 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-397143
	
	I1206 09:00:52.826534  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:52.844114  560524 main.go:143] libmachine: Using SSH client type: native
	I1206 09:00:52.844328  560524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33171 <nil> <nil>}
	I1206 09:00:52.844351  560524 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-397143' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-397143/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-397143' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 09:00:52.970393  560524 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 09:00:52.970429  560524 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-555179/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-555179/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-555179/.minikube}
	I1206 09:00:52.970460  560524 ubuntu.go:190] setting up certificates
	I1206 09:00:52.970479  560524 provision.go:84] configureAuth start
	I1206 09:00:52.970557  560524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-397143
	I1206 09:00:52.988476  560524 provision.go:143] copyHostCerts
	I1206 09:00:52.988546  560524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-555179/.minikube/ca.pem (1082 bytes)
	I1206 09:00:52.988681  560524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-555179/.minikube/cert.pem (1123 bytes)
	I1206 09:00:52.988754  560524 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-555179/.minikube/key.pem (1675 bytes)
	I1206 09:00:52.988806  560524 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-555179/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca-key.pem org=jenkins.addons-397143 san=[127.0.0.1 192.168.49.2 addons-397143 localhost minikube]
	I1206 09:00:53.151941  560524 provision.go:177] copyRemoteCerts
	I1206 09:00:53.152005  560524 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 09:00:53.152056  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:53.170236  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:00:53.264625  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 09:00:53.283805  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 09:00:53.301549  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 09:00:53.318851  560524 provision.go:87] duration metric: took 348.353095ms to configureAuth
	I1206 09:00:53.318878  560524 ubuntu.go:206] setting minikube options for container-runtime
	I1206 09:00:53.319055  560524 config.go:182] Loaded profile config "addons-397143": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 09:00:53.319113  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:53.336749  560524 main.go:143] libmachine: Using SSH client type: native
	I1206 09:00:53.337018  560524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33171 <nil> <nil>}
	I1206 09:00:53.337033  560524 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1206 09:00:53.463781  560524 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1206 09:00:53.463805  560524 ubuntu.go:71] root file system type: overlay
	I1206 09:00:53.463959  560524 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1206 09:00:53.464029  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:53.482200  560524 main.go:143] libmachine: Using SSH client type: native
	I1206 09:00:53.482430  560524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33171 <nil> <nil>}
	I1206 09:00:53.482491  560524 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1206 09:00:53.619715  560524 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1206 09:00:53.619797  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:53.637563  560524 main.go:143] libmachine: Using SSH client type: native
	I1206 09:00:53.637787  560524 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33171 <nil> <nil>}
	I1206 09:00:53.637804  560524 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1206 09:00:54.697022  560524 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-02 21:53:51.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-06 09:00:53.617749558 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
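	The `diff ... || { mv ...; systemctl ...; }` one-liner above makes the unit update idempotent: the daemon is only reloaded, enabled, and restarted when the freshly rendered docker.service actually differs from the installed one (here it did, so the diff and restart ran). A rough sketch of that compare-then-swap pattern, assuming local file access (function and variable names are hypothetical):

	    package main

	    import (
	    	"bytes"
	    	"fmt"
	    	"os"
	    	"os/exec"
	    )

	    // replaceIfChanged swaps newPath into place and restarts the unit
	    // only when the contents differ, like the `diff || { mv; systemctl
	    // daemon-reload && ... }` command in the log above.
	    func replaceIfChanged(curPath, newPath, unit string) error {
	    	cur, _ := os.ReadFile(curPath) // a missing file reads as empty
	    	next, err := os.ReadFile(newPath)
	    	if err != nil {
	    		return err
	    	}
	    	if bytes.Equal(cur, next) {
	    		return os.Remove(newPath) // unchanged: drop the staged copy
	    	}
	    	if err := os.Rename(newPath, curPath); err != nil {
	    		return err
	    	}
	    	for _, args := range [][]string{
	    		{"daemon-reload"}, {"enable", unit}, {"restart", unit},
	    	} {
	    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
	    			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
	    		}
	    	}
	    	return nil
	    }

	    func main() {
	    	err := replaceIfChanged("/lib/systemd/system/docker.service",
	    		"/lib/systemd/system/docker.service.new", "docker")
	    	if err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    	}
	    }

	Skipping the restart when nothing changed avoids bouncing the daemon on repeated provisioning runs.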
	I1206 09:00:54.697060  560524 machine.go:97] duration metric: took 5.167358872s to provisionDockerMachine
	I1206 09:00:54.697075  560524 client.go:176] duration metric: took 15.880760097s to LocalClient.Create
	I1206 09:00:54.697102  560524 start.go:167] duration metric: took 15.880829581s to libmachine.API.Create "addons-397143"
	I1206 09:00:54.697117  560524 start.go:293] postStartSetup for "addons-397143" (driver="docker")
	I1206 09:00:54.697131  560524 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 09:00:54.697213  560524 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 09:00:54.697256  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:54.716272  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:00:54.810724  560524 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 09:00:54.814408  560524 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 09:00:54.814442  560524 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 09:00:54.814455  560524 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-555179/.minikube/addons for local assets ...
	I1206 09:00:54.814507  560524 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-555179/.minikube/files for local assets ...
	I1206 09:00:54.814528  560524 start.go:296] duration metric: took 117.404395ms for postStartSetup
	I1206 09:00:54.814805  560524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-397143
	I1206 09:00:54.832349  560524 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/config.json ...
	I1206 09:00:54.832656  560524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:00:54.832709  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:54.849433  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:00:54.939127  560524 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 09:00:54.943599  560524 start.go:128] duration metric: took 16.129885011s to createHost
	I1206 09:00:54.943624  560524 start.go:83] releasing machines lock for "addons-397143", held for 16.130019742s
	I1206 09:00:54.943680  560524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-397143
	I1206 09:00:54.960708  560524 ssh_runner.go:195] Run: cat /version.json
	I1206 09:00:54.960762  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:54.960792  560524 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 09:00:54.960884  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:00:54.980101  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:00:54.980101  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:00:55.124454  560524 ssh_runner.go:195] Run: systemctl --version
	I1206 09:00:55.130888  560524 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 09:00:55.135362  560524 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 09:00:55.135435  560524 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 09:00:55.159478  560524 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1206 09:00:55.159508  560524 start.go:496] detecting cgroup driver to use...
	I1206 09:00:55.159543  560524 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:00:55.159656  560524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:00:55.173403  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1206 09:00:55.183079  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1206 09:00:55.191439  560524 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1206 09:00:55.191496  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1206 09:00:55.199653  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:00:55.207597  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1206 09:00:55.215442  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1206 09:00:55.223778  560524 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 09:00:55.231575  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1206 09:00:55.240123  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1206 09:00:55.248602  560524 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1206 09:00:55.257201  560524 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 09:00:55.264456  560524 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 09:00:55.271938  560524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:00:55.351207  560524 ssh_runner.go:195] Run: sudo systemctl restart containerd
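	The sed invocations above switch containerd to the detected "systemd" cgroup driver by line-editing /etc/containerd/config.toml in place. A comparable line-oriented rewrite in Go (a sketch under the assumption of local file access, not a real TOML parser; only the file path and key come from the log):

	    package main

	    import (
	    	"os"
	    	"regexp"
	    )

	    // setSystemdCgroup rewrites any `SystemdCgroup = ...` line to true,
	    // roughly what `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup
	    // = true|g'` does, preserving the line's indentation.
	    func setSystemdCgroup(path string) error {
	    	data, err := os.ReadFile(path)
	    	if err != nil {
	    		return err
	    	}
	    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	    	return os.WriteFile(path, out, 0644)
	    }

	    func main() {
	    	_ = setSystemdCgroup("/etc/containerd/config.toml")
	    }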
	I1206 09:00:55.428761  560524 start.go:496] detecting cgroup driver to use...
	I1206 09:00:55.428816  560524 detect.go:190] detected "systemd" cgroup driver on host os
	I1206 09:00:55.428869  560524 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1206 09:00:55.442519  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:00:55.454239  560524 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 09:00:55.474133  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 09:00:55.485504  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1206 09:00:55.497254  560524 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 09:00:55.510568  560524 ssh_runner.go:195] Run: which cri-dockerd
	I1206 09:00:55.513872  560524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1206 09:00:55.522377  560524 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1206 09:00:55.534018  560524 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1206 09:00:55.611076  560524 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1206 09:00:55.689715  560524 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I1206 09:00:55.689858  560524 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1206 09:00:55.702617  560524 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1206 09:00:55.713979  560524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:00:55.789867  560524 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1206 09:00:56.458167  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 09:00:56.470945  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1206 09:00:56.483771  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1206 09:00:56.495886  560524 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1206 09:00:56.577548  560524 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1206 09:00:56.657320  560524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:00:56.736816  560524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1206 09:00:56.761163  560524 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1206 09:00:56.772893  560524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:00:56.853472  560524 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1206 09:00:56.923834  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1206 09:00:56.936608  560524 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1206 09:00:56.936678  560524 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1206 09:00:56.940481  560524 start.go:564] Will wait 60s for crictl version
	I1206 09:00:56.940531  560524 ssh_runner.go:195] Run: which crictl
	I1206 09:00:56.943885  560524 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 09:00:56.970128  560524 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.2
	RuntimeApiVersion:  v1
	I1206 09:00:56.970199  560524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1206 09:00:56.996082  560524 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1206 09:00:57.022774  560524 out.go:252] * Preparing Kubernetes v1.34.2 on Docker 29.1.2 ...
	I1206 09:00:57.022859  560524 cli_runner.go:164] Run: docker network inspect addons-397143 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 09:00:57.040461  560524 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 09:00:57.044415  560524 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:00:57.054462  560524 kubeadm.go:884] updating cluster {Name:addons-397143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-397143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 09:00:57.054587  560524 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1206 09:00:57.054646  560524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1206 09:00:57.074482  560524 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1206 09:00:57.074506  560524 docker.go:621] Images already preloaded, skipping extraction
	I1206 09:00:57.074570  560524 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1206 09:00:57.094407  560524 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1206 09:00:57.094437  560524 cache_images.go:86] Images are preloaded, skipping loading
	I1206 09:00:57.094450  560524 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 docker true true} ...
	I1206 09:00:57.094575  560524 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-397143 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-397143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 09:00:57.094629  560524 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1206 09:00:57.145051  560524 cni.go:84] Creating CNI manager for ""
	I1206 09:00:57.145094  560524 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1206 09:00:57.145109  560524 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 09:00:57.145129  560524 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-397143 NodeName:addons-397143 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 09:00:57.145267  560524 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-397143"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
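	The kubeadm config printed above is rendered from the kubeadm options logged just before it (advertise address, pod subnet, cluster name, Kubernetes version). As a toy illustration of that kind of templated rendering, here is a text/template sketch covering a small fragment of the config; the struct, template, and field names are illustrative assumptions, not minikube's actual generator:

	    package main

	    import (
	    	"os"
	    	"text/template"
	    )

	    // params holds a few of the values that vary in the config above.
	    type params struct {
	    	NodeIP, ClusterName, PodSubnet, K8sVersion string
	    }

	    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	    kind: InitConfiguration
	    localAPIEndpoint:
	      advertiseAddress: {{.NodeIP}}
	      bindPort: 8443
	    ---
	    apiVersion: kubeadm.k8s.io/v1beta4
	    kind: ClusterConfiguration
	    clusterName: {{.ClusterName}}
	    kubernetesVersion: {{.K8sVersion}}
	    networking:
	      podSubnet: "{{.PodSubnet}}"
	    `

	    func main() {
	    	t := template.Must(template.New("kubeadm").Parse(tmpl))
	    	_ = t.Execute(os.Stdout, params{
	    		NodeIP: "192.168.49.2", ClusterName: "mk",
	    		PodSubnet: "10.244.0.0/16", K8sVersion: "v1.34.2",
	    	})
	    }

	The values filled in here match the ones visible in the log: advertiseAddress 192.168.49.2, clusterName mk, podSubnet 10.244.0.0/16, kubernetesVersion v1.34.2.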
	I1206 09:00:57.145331  560524 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 09:00:57.153792  560524 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 09:00:57.153860  560524 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 09:00:57.162367  560524 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1206 09:00:57.175361  560524 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 09:00:57.187548  560524 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1206 09:00:57.199842  560524 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 09:00:57.203449  560524 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 09:00:57.213429  560524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:00:57.291868  560524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:00:57.316380  560524 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143 for IP: 192.168.49.2
	I1206 09:00:57.316406  560524 certs.go:195] generating shared ca certs ...
	I1206 09:00:57.316429  560524 certs.go:227] acquiring lock for ca certs: {Name:mk4bb3cf92982779c7f527f324bcd90239618827 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.316570  560524 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-555179/.minikube/ca.key
	I1206 09:00:57.392934  560524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-555179/.minikube/ca.crt ...
	I1206 09:00:57.392978  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/ca.crt: {Name:mk80900300c2b34fed0332b66effe4ef5b1d4e58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.393213  560524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-555179/.minikube/ca.key ...
	I1206 09:00:57.393233  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/ca.key: {Name:mk38293f1c37569e15cfd07f213c8dd4cda75e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.393363  560524 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-555179/.minikube/proxy-client-ca.key
	I1206 09:00:57.471463  560524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-555179/.minikube/proxy-client-ca.crt ...
	I1206 09:00:57.471506  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/proxy-client-ca.crt: {Name:mk2189b62446dd0057916c9318855bef74f0d257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.471728  560524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-555179/.minikube/proxy-client-ca.key ...
	I1206 09:00:57.471747  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/proxy-client-ca.key: {Name:mk5b6b93f1431d75fe362b6511741d95c3d131cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.471864  560524 certs.go:257] generating profile certs ...
	I1206 09:00:57.471964  560524 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.key
	I1206 09:00:57.471994  560524 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt with IP's: []
	I1206 09:00:57.556379  560524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt ...
	I1206 09:00:57.556422  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: {Name:mk6e330ed6ad1a27c7376513d660fb6406e8f9db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.556638  560524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.key ...
	I1206 09:00:57.556658  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.key: {Name:mk20bdf5986eaa27503f234813f0acfbaedc8d6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.556793  560524 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.key.64a9624b
	I1206 09:00:57.556829  560524 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.crt.64a9624b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1206 09:00:57.622666  560524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.crt.64a9624b ...
	I1206 09:00:57.622705  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.crt.64a9624b: {Name:mkd4c56c0e4e0fa4043dd82252aa03b82255adae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.622938  560524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.key.64a9624b ...
	I1206 09:00:57.622958  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.key.64a9624b: {Name:mkc16c3c0d7af7681db53e117f4d6a14268aa281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.623094  560524 certs.go:382] copying /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.crt.64a9624b -> /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.crt
	I1206 09:00:57.623240  560524 certs.go:386] copying /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.key.64a9624b -> /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.key
	I1206 09:00:57.623359  560524 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.key
	I1206 09:00:57.623392  560524 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.crt with IP's: []
	I1206 09:00:57.694161  560524 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.crt ...
	I1206 09:00:57.694203  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.crt: {Name:mkafbbf80c073fb0f531ef2ffbc0313f93adfefe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.694431  560524 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.key ...
	I1206 09:00:57.694452  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.key: {Name:mk7fd2bc77471f6c57074616f679fa918e5d2069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:00:57.694701  560524 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca-key.pem (1679 bytes)
	I1206 09:00:57.694790  560524 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/ca.pem (1082 bytes)
	I1206 09:00:57.694842  560524 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/cert.pem (1123 bytes)
	I1206 09:00:57.694884  560524 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-555179/.minikube/certs/key.pem (1675 bytes)
	I1206 09:00:57.695518  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 09:00:57.714334  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 09:00:57.731487  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 09:00:57.748277  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 09:00:57.765291  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 09:00:57.783015  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 09:00:57.800117  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 09:00:57.817518  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 09:00:57.834259  560524 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-555179/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 09:00:57.854210  560524 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 09:00:57.866593  560524 ssh_runner.go:195] Run: openssl version
	I1206 09:00:57.872823  560524 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:00:57.880032  560524 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 09:00:57.889591  560524 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:00:57.893248  560524 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 09:00 /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:00:57.893304  560524 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 09:00:57.926811  560524 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 09:00:57.934667  560524 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 09:00:57.942065  560524 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 09:00:57.945683  560524 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
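	Exit status 1 from stat is how the provisioner concludes this is likely a first start: the apiserver-kubelet-client cert has not been generated yet. With local file access the equivalent check is a few lines of Go (a sketch; only the cert path comes from the log):

	    package main

	    import (
	    	"errors"
	    	"fmt"
	    	"io/fs"
	    	"os"
	    )

	    // firstStart reports whether the kubelet client cert is absent, the
	    // same signal the failed stat call above provides.
	    func firstStart(certPath string) (bool, error) {
	    	_, err := os.Stat(certPath)
	    	if errors.Is(err, fs.ErrNotExist) {
	    		return true, nil
	    	}
	    	return false, err
	    }

	    func main() {
	    	first, err := firstStart("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	    	fmt.Println(first, err)
	    }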
	I1206 09:00:57.945740  560524 kubeadm.go:401] StartCluster: {Name:addons-397143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-397143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:00:57.945846  560524 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1206 09:00:57.964685  560524 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 09:00:57.972818  560524 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 09:00:57.980556  560524 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 09:00:57.980615  560524 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 09:00:57.988018  560524 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 09:00:57.988038  560524 kubeadm.go:158] found existing configuration files:
	
	I1206 09:00:57.988097  560524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 09:00:57.995437  560524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 09:00:57.995496  560524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 09:00:58.002581  560524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 09:00:58.009941  560524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 09:00:58.009995  560524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 09:00:58.017973  560524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 09:00:58.025553  560524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 09:00:58.025604  560524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 09:00:58.033693  560524 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 09:00:58.042247  560524 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 09:00:58.042310  560524 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 09:00:58.050501  560524 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 09:00:58.112588  560524 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1206 09:00:58.170554  560524 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 09:01:07.786900  560524 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 09:01:07.787025  560524 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 09:01:07.787117  560524 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 09:01:07.787165  560524 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1206 09:01:07.787205  560524 kubeadm.go:319] OS: Linux
	I1206 09:01:07.787243  560524 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 09:01:07.787281  560524 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 09:01:07.787327  560524 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 09:01:07.787414  560524 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 09:01:07.787518  560524 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 09:01:07.787589  560524 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 09:01:07.787658  560524 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 09:01:07.787736  560524 kubeadm.go:319] CGROUPS_IO: enabled
	I1206 09:01:07.787835  560524 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 09:01:07.787987  560524 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 09:01:07.788100  560524 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 09:01:07.788168  560524 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 09:01:07.789727  560524 out.go:252]   - Generating certificates and keys ...
	I1206 09:01:07.789795  560524 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 09:01:07.789855  560524 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 09:01:07.789925  560524 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 09:01:07.789977  560524 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 09:01:07.790043  560524 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 09:01:07.790089  560524 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 09:01:07.790133  560524 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 09:01:07.790232  560524 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-397143 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 09:01:07.790279  560524 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 09:01:07.790377  560524 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-397143 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 09:01:07.790431  560524 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 09:01:07.790481  560524 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 09:01:07.790517  560524 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 09:01:07.790603  560524 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 09:01:07.790687  560524 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 09:01:07.790762  560524 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 09:01:07.790805  560524 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 09:01:07.790858  560524 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 09:01:07.790927  560524 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 09:01:07.791012  560524 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 09:01:07.791077  560524 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 09:01:07.792313  560524 out.go:252]   - Booting up control plane ...
	I1206 09:01:07.792387  560524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 09:01:07.792456  560524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 09:01:07.792509  560524 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 09:01:07.792591  560524 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 09:01:07.792664  560524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 09:01:07.792753  560524 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 09:01:07.792819  560524 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 09:01:07.792851  560524 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 09:01:07.792977  560524 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 09:01:07.793072  560524 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 09:01:07.793125  560524 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001252402s
	I1206 09:01:07.793207  560524 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 09:01:07.793308  560524 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1206 09:01:07.793426  560524 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 09:01:07.793526  560524 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 09:01:07.793610  560524 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.004446485s
	I1206 09:01:07.793666  560524 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.765836939s
	I1206 09:01:07.793722  560524 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501152133s
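
These control-plane-check probes hit ordinary HTTPS health endpoints, so they can be reproduced by hand from the node. A minimal sketch using the same URLs logged above (-k skips certificate verification, which is acceptable for these local health probes):

    # Same endpoints kubeadm polls in the control-plane-check phase
    curl -k https://192.168.49.2:8443/livez      # kube-apiserver
    curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez        # kube-scheduler
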
	I1206 09:01:07.793815  560524 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 09:01:07.793937  560524 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 09:01:07.793988  560524 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 09:01:07.794184  560524 kubeadm.go:319] [mark-control-plane] Marking the node addons-397143 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 09:01:07.794239  560524 kubeadm.go:319] [bootstrap-token] Using token: ns4ow5.ed430geiuztapn6x
	I1206 09:01:07.795505  560524 out.go:252]   - Configuring RBAC rules ...
	I1206 09:01:07.795590  560524 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 09:01:07.795664  560524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 09:01:07.795780  560524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 09:01:07.795905  560524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 09:01:07.796030  560524 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 09:01:07.796112  560524 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 09:01:07.796212  560524 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 09:01:07.796259  560524 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 09:01:07.796314  560524 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 09:01:07.796319  560524 kubeadm.go:319] 
	I1206 09:01:07.796370  560524 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 09:01:07.796381  560524 kubeadm.go:319] 
	I1206 09:01:07.796464  560524 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 09:01:07.796473  560524 kubeadm.go:319] 
	I1206 09:01:07.796507  560524 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 09:01:07.796557  560524 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 09:01:07.796604  560524 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 09:01:07.796611  560524 kubeadm.go:319] 
	I1206 09:01:07.796653  560524 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 09:01:07.796662  560524 kubeadm.go:319] 
	I1206 09:01:07.796702  560524 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 09:01:07.796709  560524 kubeadm.go:319] 
	I1206 09:01:07.796760  560524 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 09:01:07.796819  560524 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 09:01:07.796881  560524 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 09:01:07.796887  560524 kubeadm.go:319] 
	I1206 09:01:07.796970  560524 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 09:01:07.797056  560524 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 09:01:07.797061  560524 kubeadm.go:319] 
	I1206 09:01:07.797134  560524 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ns4ow5.ed430geiuztapn6x \
	I1206 09:01:07.797233  560524 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f716ac9d2865d69eed0a26d1db8a24df2dd71b0fd1ee780c7774713123bac1e1 \
	I1206 09:01:07.797252  560524 kubeadm.go:319] 	--control-plane 
	I1206 09:01:07.797257  560524 kubeadm.go:319] 
	I1206 09:01:07.797329  560524 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 09:01:07.797335  560524 kubeadm.go:319] 
	I1206 09:01:07.797398  560524 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ns4ow5.ed430geiuztapn6x \
	I1206 09:01:07.797494  560524 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f716ac9d2865d69eed0a26d1db8a24df2dd71b0fd1ee780c7774713123bac1e1 
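
The bootstrap token in the join commands above is short-lived. If it has expired by the time another node joins, a fresh join command can be printed on the control plane; a standard kubeadm invocation, shown here only for reference:

    # Regenerate a bootstrap token and print a complete join command
    kubeadm token create --print-join-command
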
	I1206 09:01:07.797508  560524 cni.go:84] Creating CNI manager for ""
	I1206 09:01:07.797525  560524 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1206 09:01:07.798698  560524 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1206 09:01:07.799613  560524 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1206 09:01:07.807969  560524 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
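
The 496-byte conflist scp'd above is minikube's bridge CNI configuration. Its exact contents are not echoed in the log, but a typical bridge conflist has this shape (the names and the pod subnet below are illustrative assumptions, not the literal file written here):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
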
	I1206 09:01:07.820476  560524 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 09:01:07.820551  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:07.820607  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-397143 minikube.k8s.io/updated_at=2025_12_06T09_01_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=addons-397143 minikube.k8s.io/primary=true
	I1206 09:01:07.831980  560524 ops.go:34] apiserver oom_adj: -16
	I1206 09:01:07.900018  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:08.400477  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:08.900336  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:09.400663  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:09.900321  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:10.400371  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:10.900680  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:11.400334  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:11.900841  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:12.401057  560524 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 09:01:12.476765  560524 kubeadm.go:1114] duration metric: took 4.656262539s to wait for elevateKubeSystemPrivileges
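
The burst of repeated "kubectl get sa default" calls above is a readiness poll: minikube loops until the default service account exists before it grants kube-system elevated privileges. Roughly equivalent shell, with the paths taken from the log (the interval is illustrative; attempts in the log are spaced about 500ms apart):

    until sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500ms spacing between the logged attempts
    done
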
	I1206 09:01:12.476809  560524 kubeadm.go:403] duration metric: took 14.53107513s to StartCluster
	I1206 09:01:12.476830  560524 settings.go:142] acquiring lock: {Name:mk6c714838f6ea9636d4320a94ca67badc317f70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:01:12.476994  560524 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:01:12.477464  560524 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-555179/kubeconfig: {Name:mk8a2d601ffa4d6c208ceb157eb91d604defe102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 09:01:12.477679  560524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 09:01:12.477708  560524 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1206 09:01:12.477778  560524 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
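
Each entry in the toEnable map above corresponds to a per-profile addon toggle. The same state can be inspected or changed from the minikube CLI; for example:

    minikube addons list -p addons-397143              # show enabled/disabled state
    minikube addons enable ingress -p addons-397143    # what ingress:true encodes
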
	I1206 09:01:12.477906  560524 addons.go:70] Setting ingress-dns=true in profile "addons-397143"
	I1206 09:01:12.477935  560524 addons.go:70] Setting volcano=true in profile "addons-397143"
	I1206 09:01:12.477942  560524 addons.go:239] Setting addon ingress-dns=true in "addons-397143"
	I1206 09:01:12.477955  560524 addons.go:239] Setting addon volcano=true in "addons-397143"
	I1206 09:01:12.477953  560524 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-397143"
	I1206 09:01:12.477953  560524 addons.go:70] Setting registry-creds=true in profile "addons-397143"
	I1206 09:01:12.477986  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.477987  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.477994  560524 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-397143"
	I1206 09:01:12.478007  560524 addons.go:70] Setting inspektor-gadget=true in profile "addons-397143"
	I1206 09:01:12.478021  560524 addons.go:239] Setting addon inspektor-gadget=true in "addons-397143"
	I1206 09:01:12.478052  560524 config.go:182] Loaded profile config "addons-397143": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 09:01:12.478066  560524 addons.go:70] Setting metrics-server=true in profile "addons-397143"
	I1206 09:01:12.478086  560524 addons.go:239] Setting addon metrics-server=true in "addons-397143"
	I1206 09:01:12.478101  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.478057  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.478372  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.478527  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.478548  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.478595  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.478635  560524 addons.go:70] Setting volumesnapshots=true in profile "addons-397143"
	I1206 09:01:12.478653  560524 addons.go:239] Setting addon volumesnapshots=true in "addons-397143"
	I1206 09:01:12.478688  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.478704  560524 addons.go:70] Setting storage-provisioner=true in profile "addons-397143"
	I1206 09:01:12.478728  560524 addons.go:239] Setting addon storage-provisioner=true in "addons-397143"
	I1206 09:01:12.478756  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.479209  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.479238  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.479693  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.477902  560524 addons.go:70] Setting yakd=true in profile "addons-397143"
	I1206 09:01:12.480168  560524 addons.go:239] Setting addon yakd=true in "addons-397143"
	I1206 09:01:12.480200  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.480319  560524 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-397143"
	I1206 09:01:12.480380  560524 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-397143"
	I1206 09:01:12.480414  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.481104  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.481327  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.484375  560524 addons.go:70] Setting ingress=true in profile "addons-397143"
	I1206 09:01:12.484404  560524 addons.go:239] Setting addon ingress=true in "addons-397143"
	I1206 09:01:12.480535  560524 out.go:179] * Verifying Kubernetes components...
	I1206 09:01:12.484450  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.477997  560524 addons.go:239] Setting addon registry-creds=true in "addons-397143"
	I1206 09:01:12.480552  560524 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-397143"
	I1206 09:01:12.480564  560524 addons.go:70] Setting cloud-spanner=true in profile "addons-397143"
	I1206 09:01:12.480578  560524 addons.go:70] Setting gcp-auth=true in profile "addons-397143"
	I1206 09:01:12.480587  560524 addons.go:70] Setting default-storageclass=true in profile "addons-397143"
	I1206 09:01:12.480599  560524 addons.go:70] Setting registry=true in profile "addons-397143"
	I1206 09:01:12.480609  560524 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-397143"
	I1206 09:01:12.484964  560524 addons.go:239] Setting addon cloud-spanner=true in "addons-397143"
	I1206 09:01:12.485001  560524 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-397143"
	I1206 09:01:12.485019  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.485096  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.485476  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.485853  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.486122  560524 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-397143"
	I1206 09:01:12.486166  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.486170  560524 addons.go:239] Setting addon registry=true in "addons-397143"
	I1206 09:01:12.486205  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.486614  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.486640  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.486864  560524 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-397143"
	I1206 09:01:12.487236  560524 mustload.go:66] Loading cluster: addons-397143
	I1206 09:01:12.484943  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.489054  560524 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 09:01:12.485004  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.496717  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.497082  560524 config.go:182] Loaded profile config "addons-397143": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 09:01:12.497363  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.497816  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.533422  560524 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-397143"
	I1206 09:01:12.533841  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.535176  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.537207  560524 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 09:01:12.538688  560524 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 09:01:12.538709  560524 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 09:01:12.538768  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
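
The docker inspect template above resolves the host port mapped to the container's SSH port; combined with the key path and user that appear in the sshutil lines below, it yields the endpoint minikube dials. A by-hand sketch using the same template and credentials from this log:

    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-397143)
    ssh -p "$PORT" \
      -i /home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa \
      docker@127.0.0.1
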
	I1206 09:01:12.542673  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 09:01:12.543796  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 09:01:12.544109  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 09:01:12.545082  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 09:01:12.546161  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 09:01:12.547288  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 09:01:12.547471  560524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 09:01:12.547498  560524 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 09:01:12.547569  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.549604  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 09:01:12.550739  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 09:01:12.552447  560524 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 09:01:12.553669  560524 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 09:01:12.554907  560524 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:01:12.554963  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 09:01:12.555062  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.553692  560524 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 09:01:12.555164  560524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 09:01:12.555202  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.564727  560524 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 09:01:12.566161  560524 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 09:01:12.566885  560524 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 09:01:12.566907  560524 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 09:01:12.566996  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.567582  560524 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 09:01:12.567601  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 09:01:12.567656  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.579617  560524 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 09:01:12.580901  560524 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 09:01:12.581341  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.582123  560524 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 09:01:12.582141  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 09:01:12.582216  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.586492  560524 addons.go:239] Setting addon default-storageclass=true in "addons-397143"
	I1206 09:01:12.586536  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:12.587059  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:12.588005  560524 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1206 09:01:12.589252  560524 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1206 09:01:12.589864  560524 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1206 09:01:12.590790  560524 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:01:12.590810  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 09:01:12.590869  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.594460  560524 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1206 09:01:12.598014  560524 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:01:12.598549  560524 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 09:01:12.598951  560524 addons.go:436] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1206 09:01:12.598977  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1206 09:01:12.599120  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.600079  560524 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:01:12.600099  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 09:01:12.600173  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.601042  560524 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:01:12.602127  560524 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1206 09:01:12.603901  560524 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:01:12.605107  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 09:01:12.605234  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.617423  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.619045  560524 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 09:01:12.620657  560524 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:01:12.620678  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 09:01:12.620737  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.620977  560524 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1206 09:01:12.621967  560524 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:01:12.621985  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 09:01:12.622039  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.632592  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.635115  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.646054  560524 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 09:01:12.648845  560524 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:01:12.648869  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 09:01:12.648987  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.653887  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.658336  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.659661  560524 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
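
The sed pipeline above rewrites the coredns ConfigMap in flight: it inserts a hosts block ahead of the forward directive and a log directive ahead of errors. The resulting Corefile fragment looks roughly like this (surrounding directives elided):

        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
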
	I1206 09:01:12.660189  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.662375  560524 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 09:01:12.664310  560524 out.go:179]   - Using image docker.io/busybox:stable
	I1206 09:01:12.665404  560524 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:01:12.665420  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 09:01:12.665488  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.674399  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.675039  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.676290  560524 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 09:01:12.676309  560524 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 09:01:12.676358  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:12.682439  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.687743  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.692129  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.707714  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.711064  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	W1206 09:01:12.711163  560524 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 09:01:12.711203  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.711223  560524 retry.go:31] will retry after 158.051231ms: ssh: handshake failed: EOF
	I1206 09:01:12.718598  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:12.730956  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	W1206 09:01:12.735503  560524 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 09:01:12.735534  560524 retry.go:31] will retry after 278.0498ms: ssh: handshake failed: EOF
	I1206 09:01:12.748705  560524 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 09:01:12.808924  560524 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 09:01:12.808952  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 09:01:12.832044  560524 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 09:01:12.832077  560524 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 09:01:12.833212  560524 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 09:01:12.833235  560524 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 09:01:12.854358  560524 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 09:01:12.854380  560524 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 09:01:12.862635  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 09:01:12.874871  560524 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 09:01:12.874896  560524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 09:01:12.876000  560524 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:01:12.876022  560524 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 09:01:12.894541  560524 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 09:01:12.894575  560524 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 09:01:12.896838  560524 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 09:01:12.896859  560524 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 09:01:12.912158  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 09:01:12.912561  560524 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 09:01:12.912580  560524 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 09:01:12.920060  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 09:01:12.922464  560524 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 09:01:12.922489  560524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 09:01:12.931369  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 09:01:12.931574  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 09:01:12.931884  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 09:01:12.934740  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 09:01:12.935185  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 09:01:12.941343  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 09:01:12.945896  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1206 09:01:12.960492  560524 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:01:12.960514  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 09:01:12.960988  560524 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 09:01:12.961009  560524 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 09:01:12.975564  560524 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 09:01:12.975665  560524 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 09:01:12.977614  560524 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 09:01:12.977634  560524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 09:01:13.021373  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 09:01:13.041637  560524 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 09:01:13.041664  560524 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 09:01:13.043958  560524 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:01:13.044171  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 09:01:13.059562  560524 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 09:01:13.059704  560524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 09:01:13.101737  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 09:01:13.114966  560524 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 09:01:13.115006  560524 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 09:01:13.121288  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:01:13.131456  560524 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:01:13.131478  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 09:01:13.201875  560524 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1206 09:01:13.203058  560524 node_ready.go:35] waiting up to 6m0s for node "addons-397143" to be "Ready" ...
	I1206 09:01:13.205984  560524 node_ready.go:49] node "addons-397143" is "Ready"
	I1206 09:01:13.206067  560524 node_ready.go:38] duration metric: took 2.852565ms for node "addons-397143" to be "Ready" ...
	I1206 09:01:13.206099  560524 api_server.go:52] waiting for apiserver process to appear ...
	I1206 09:01:13.206169  560524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:01:13.219718  560524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 09:01:13.219797  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 09:01:13.256531  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 09:01:13.320712  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 09:01:13.366142  560524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 09:01:13.366172  560524 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 09:01:13.421754  560524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 09:01:13.421779  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 09:01:13.607566  560524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 09:01:13.607599  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1206 09:01:13.677730  560524 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:01:13.677757  560524 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 09:01:13.707204  560524 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-397143" context rescaled to 1 replicas
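
The rescale above drops CoreDNS from its two initial replicas (both visible in the pod list later in this log) to one, which is enough for a single-node cluster; done by hand it would be:

    kubectl --context addons-397143 -n kube-system scale deployment coredns --replicas=1
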
	I1206 09:01:13.737431  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 09:01:14.115178  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.202814857s)
	I1206 09:01:14.115249  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.195164084s)
	I1206 09:01:14.115304  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.183911038s)
	I1206 09:01:14.115428  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.252699742s)
	I1206 09:01:14.115667  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.184070234s)
	I1206 09:01:14.115730  560524 addons.go:495] Verifying addon metrics-server=true in "addons-397143"
	I1206 09:01:14.866616  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.931783086s)
	I1206 09:01:14.866666  560524 addons.go:495] Verifying addon ingress=true in "addons-397143"
	I1206 09:01:14.867029  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.931634954s)
	I1206 09:01:14.867140  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.935229961s)
	I1206 09:01:14.867266  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.925846261s)
	I1206 09:01:14.867962  560524 out.go:179] * Verifying ingress addon...
	I1206 09:01:14.870173  560524 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 09:01:14.874506  560524 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 09:01:14.874535  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:15.374191  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:15.877901  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:15.914260  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.96829715s)
	I1206 09:01:15.914340  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (2.892942257s)
	I1206 09:01:15.914359  560524 addons.go:495] Verifying addon registry=true in "addons-397143"
	I1206 09:01:15.914636  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.812866152s)
	I1206 09:01:15.914774  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.793451496s)
	W1206 09:01:15.914820  560524 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 09:01:15.914839  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (2.658210309s)
	I1206 09:01:15.914851  560524 retry.go:31] will retry after 235.577069ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	(stdout/stderr identical to the apply failure quoted above)
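
The failure above is a CRD-establishment race: the VolumeSnapshotClass object is applied in the same kubectl batch as the CRDs that define its kind, and the apiserver has not yet registered the new API when the class is submitted. The retry resolves it; an explicit fix would wait for the CRD first, for example:

    kubectl wait --for=condition=established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
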
	I1206 09:01:15.914788  560524 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.708588684s)
	I1206 09:01:15.914941  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.594176794s)
	I1206 09:01:15.914885  560524 api_server.go:72] duration metric: took 3.43714198s to wait for apiserver process to appear ...
	I1206 09:01:15.915156  560524 api_server.go:88] waiting for apiserver healthz status ...
	I1206 09:01:15.915182  560524 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1206 09:01:15.915235  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.1777014s)
	I1206 09:01:15.915255  560524 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-397143"
	I1206 09:01:15.917085  560524 out.go:179] * Verifying registry addon...
	I1206 09:01:15.917096  560524 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-397143 service yakd-dashboard -n yakd-dashboard
	
	I1206 09:01:15.917094  560524 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 09:01:15.919365  560524 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 09:01:15.920749  560524 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 09:01:15.923542  560524 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1206 09:01:15.925084  560524 api_server.go:141] control plane version: v1.34.2
	I1206 09:01:15.925111  560524 api_server.go:131] duration metric: took 9.945321ms to wait for apiserver health ...
	I1206 09:01:15.925122  560524 system_pods.go:43] waiting for kube-system pods to appear ...
	W1206 09:01:15.928076  560524 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
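
The storage-class warning above is Kubernetes optimistic concurrency: the update to csi-hostpath-sc carried a stale resourceVersion because another writer modified the object in between. A patch avoids the conflict since it does not submit a resourceVersion; marking the class non-default by hand would look like:

    kubectl patch storageclass csi-hostpath-sc -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
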
	I1206 09:01:15.978527  560524 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 09:01:15.978678  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:15.983004  560524 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 09:01:15.983030  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:15.983620  560524 system_pods.go:59] 20 kube-system pods found
	I1206 09:01:15.983673  560524 system_pods.go:61] "amd-gpu-device-plugin-7v6qd" [e7bf466a-43c8-41cb-9860-7d52e5aff252] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:01:15.983688  560524 system_pods.go:61] "coredns-66bc5c9577-fpvgk" [41ee2021-f73e-404a-b1a3-00dfc267d583] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:01:15.983709  560524 system_pods.go:61] "coredns-66bc5c9577-x6rcj" [751d4d7f-a16d-4565-bff1-92503cdd9a58] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:01:15.983741  560524 system_pods.go:61] "csi-hostpath-attacher-0" [5d7370f7-00bc-4ee4-a1e5-aa9eeb7fb030] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:01:15.983757  560524 system_pods.go:61] "csi-hostpath-resizer-0" [a5cda1a5-ed95-4107-8e42-afd26d252c12] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:01:15.983772  560524 system_pods.go:61] "csi-hostpathplugin-w827s" [4cfe1385-8cf1-4411-bd96-1bb35c41e598] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:01:15.983784  560524 system_pods.go:61] "etcd-addons-397143" [61458027-2120-4ba5-ae6f-72c25bbb629a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:01:15.983801  560524 system_pods.go:61] "kube-apiserver-addons-397143" [04461dfb-ba60-400b-bc1b-eb5cecb4e116] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:01:15.983846  560524 system_pods.go:61] "kube-controller-manager-addons-397143" [a6b3b762-cc38-492a-8592-f0622969a194] Running
	I1206 09:01:15.983856  560524 system_pods.go:61] "kube-ingress-dns-minikube" [33c3e281-3395-41a9-a0d7-5abebf00c814] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:01:15.983863  560524 system_pods.go:61] "kube-proxy-6fcf7" [31f6821e-c37c-4b3d-8b96-240fb218fc3f] Running
	I1206 09:01:15.983874  560524 system_pods.go:61] "kube-scheduler-addons-397143" [e6e2d9e1-c8bf-4735-b568-8de4d6e7a1c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:01:15.983891  560524 system_pods.go:61] "metrics-server-85b7d694d7-kf4gp" [028fa0be-ce61-4a1a-88bc-1cb6d15e3e69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:01:15.983929  560524 system_pods.go:61] "nvidia-device-plugin-daemonset-znf8f" [17e7dbb3-481b-40e2-95e0-1b3aeb866481] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 09:01:15.983958  560524 system_pods.go:61] "registry-6b586f9694-zpjdp" [548aae88-07b1-44c4-be9a-0e70e03f5eb2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:01:15.983973  560524 system_pods.go:61] "registry-creds-764b6fb674-2jnnm" [801b6007-fb7e-483b-8257-f46427397644] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:01:15.983981  560524 system_pods.go:61] "registry-proxy-rxwrl" [29e4f265-043b-4c75-862c-a02beb7c6e1e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:01:15.984004  560524 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9bpfc" [f513e41b-59c9-49c0-b134-9dde97dcaa6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:01:15.984013  560524 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kq9t8" [4a77f10b-ff26-482a-9c59-65660620545d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:01:15.984019  560524 system_pods.go:61] "storage-provisioner" [16d5c01b-4db2-4009-b381-13397dfde1d0] Running
	I1206 09:01:15.984028  560524 system_pods.go:74] duration metric: took 58.899165ms to wait for pod list to return data ...
	I1206 09:01:15.984039  560524 default_sa.go:34] waiting for default service account to be created ...
	I1206 09:01:15.987167  560524 default_sa.go:45] found service account: "default"
	I1206 09:01:15.987193  560524 default_sa.go:55] duration metric: took 3.148522ms for default service account to be created ...
	I1206 09:01:15.987205  560524 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 09:01:16.082329  560524 system_pods.go:86] 20 kube-system pods found
	I1206 09:01:16.082374  560524 system_pods.go:89] "amd-gpu-device-plugin-7v6qd" [e7bf466a-43c8-41cb-9860-7d52e5aff252] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1206 09:01:16.082385  560524 system_pods.go:89] "coredns-66bc5c9577-fpvgk" [41ee2021-f73e-404a-b1a3-00dfc267d583] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:01:16.082403  560524 system_pods.go:89] "coredns-66bc5c9577-x6rcj" [751d4d7f-a16d-4565-bff1-92503cdd9a58] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 09:01:16.082416  560524 system_pods.go:89] "csi-hostpath-attacher-0" [5d7370f7-00bc-4ee4-a1e5-aa9eeb7fb030] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 09:01:16.082424  560524 system_pods.go:89] "csi-hostpath-resizer-0" [a5cda1a5-ed95-4107-8e42-afd26d252c12] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 09:01:16.082434  560524 system_pods.go:89] "csi-hostpathplugin-w827s" [4cfe1385-8cf1-4411-bd96-1bb35c41e598] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 09:01:16.082451  560524 system_pods.go:89] "etcd-addons-397143" [61458027-2120-4ba5-ae6f-72c25bbb629a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 09:01:16.082461  560524 system_pods.go:89] "kube-apiserver-addons-397143" [04461dfb-ba60-400b-bc1b-eb5cecb4e116] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 09:01:16.082471  560524 system_pods.go:89] "kube-controller-manager-addons-397143" [a6b3b762-cc38-492a-8592-f0622969a194] Running
	I1206 09:01:16.082480  560524 system_pods.go:89] "kube-ingress-dns-minikube" [33c3e281-3395-41a9-a0d7-5abebf00c814] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 09:01:16.082491  560524 system_pods.go:89] "kube-proxy-6fcf7" [31f6821e-c37c-4b3d-8b96-240fb218fc3f] Running
	I1206 09:01:16.082500  560524 system_pods.go:89] "kube-scheduler-addons-397143" [e6e2d9e1-c8bf-4735-b568-8de4d6e7a1c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 09:01:16.082508  560524 system_pods.go:89] "metrics-server-85b7d694d7-kf4gp" [028fa0be-ce61-4a1a-88bc-1cb6d15e3e69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 09:01:16.082517  560524 system_pods.go:89] "nvidia-device-plugin-daemonset-znf8f" [17e7dbb3-481b-40e2-95e0-1b3aeb866481] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 09:01:16.082533  560524 system_pods.go:89] "registry-6b586f9694-zpjdp" [548aae88-07b1-44c4-be9a-0e70e03f5eb2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 09:01:16.082547  560524 system_pods.go:89] "registry-creds-764b6fb674-2jnnm" [801b6007-fb7e-483b-8257-f46427397644] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 09:01:16.082560  560524 system_pods.go:89] "registry-proxy-rxwrl" [29e4f265-043b-4c75-862c-a02beb7c6e1e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 09:01:16.082574  560524 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9bpfc" [f513e41b-59c9-49c0-b134-9dde97dcaa6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:01:16.082587  560524 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kq9t8" [4a77f10b-ff26-482a-9c59-65660620545d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 09:01:16.082593  560524 system_pods.go:89] "storage-provisioner" [16d5c01b-4db2-4009-b381-13397dfde1d0] Running
	I1206 09:01:16.082603  560524 system_pods.go:126] duration metric: took 95.390308ms to wait for k8s-apps to be running ...
	I1206 09:01:16.082613  560524 system_svc.go:44] waiting for kubelet service to be running ...
	I1206 09:01:16.082672  560524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:01:16.101439  560524 system_svc.go:56] duration metric: took 18.813639ms WaitForService to wait for kubelet
	I1206 09:01:16.101478  560524 kubeadm.go:587] duration metric: took 3.623733904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 09:01:16.101503  560524 node_conditions.go:102] verifying NodePressure condition ...
	I1206 09:01:16.104673  560524 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1206 09:01:16.104710  560524 node_conditions.go:123] node cpu capacity is 8
	I1206 09:01:16.104732  560524 node_conditions.go:105] duration metric: took 3.221231ms to run NodePressure ...
	I1206 09:01:16.104748  560524 start.go:242] waiting for startup goroutines ...
	I1206 09:01:16.150736  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 09:01:16.374355  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:16.474630  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:16.474649  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:16.874595  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:16.924250  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:16.925463  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:17.374139  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:17.474704  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:17.474934  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:17.874625  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:17.936232  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:17.936489  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:18.373789  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:18.474746  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:18.474967  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:18.748694  560524 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.597908981s)
	I1206 09:01:18.874592  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:18.975661  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:18.975839  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:19.375294  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:19.475391  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:19.475635  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:19.874372  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:19.923637  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:19.923636  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:19.988427  560524 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 09:01:19.988506  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:20.013006  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:20.121268  560524 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 09:01:20.138006  560524 addons.go:239] Setting addon gcp-auth=true in "addons-397143"
	I1206 09:01:20.138077  560524 host.go:66] Checking if "addons-397143" exists ...
	I1206 09:01:20.138517  560524 cli_runner.go:164] Run: docker container inspect addons-397143 --format={{.State.Status}}
	I1206 09:01:20.160717  560524 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 09:01:20.160780  560524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-397143
	I1206 09:01:20.182867  560524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/addons-397143/id_rsa Username:docker}
	I1206 09:01:20.283476  560524 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 09:01:20.284929  560524 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 09:01:20.285954  560524 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 09:01:20.285977  560524 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 09:01:20.301724  560524 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 09:01:20.301749  560524 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 09:01:20.317310  560524 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:01:20.317334  560524 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 09:01:20.333547  560524 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 09:01:20.374360  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:20.423398  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:20.423565  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:20.710055  560524 addons.go:495] Verifying addon gcp-auth=true in "addons-397143"
	I1206 09:01:20.711695  560524 out.go:179] * Verifying gcp-auth addon...
	I1206 09:01:20.714378  560524 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 09:01:20.716846  560524 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 09:01:20.716870  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:20.874932  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:20.922982  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:20.923216  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:21.418604  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:21.419016  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:21.422465  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:21.423162  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:21.718363  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:21.873843  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:21.974662  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:21.975038  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:22.217716  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:22.373626  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:22.422901  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:22.424209  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:22.717880  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:22.873899  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:22.923166  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:22.923520  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:23.217411  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:23.373732  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:23.423353  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:23.424386  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:23.717719  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:23.874090  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:23.923022  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:23.923506  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:24.217799  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:24.374269  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:24.423436  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:24.423768  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:24.718556  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:24.873245  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:24.924519  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:24.924590  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:25.217781  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:25.374437  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:25.423356  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:25.423667  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:25.717553  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:25.874423  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:25.975702  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:25.976154  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:26.218184  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:26.374083  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:26.423742  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:26.423942  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:26.718904  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:26.874412  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:26.935940  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:26.975472  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:27.217948  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:27.374749  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:27.423181  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:27.423239  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:27.718475  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:27.873518  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:27.923490  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:27.923750  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:28.217627  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:28.374122  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:28.474332  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:28.474409  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:28.718288  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:28.872993  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:28.922734  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:28.923229  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:29.217373  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:29.374166  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:29.423444  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:29.423469  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:29.718531  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:29.873852  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:29.923138  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:29.924356  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:30.218103  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:30.374940  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:30.423797  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:30.424005  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:30.717852  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:30.954652  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:30.954714  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:30.954830  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:31.218067  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:31.374239  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:31.423063  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:31.423647  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:31.717891  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:31.873874  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:31.923089  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:31.923262  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:32.217444  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:32.373976  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:32.423332  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:32.423553  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:32.754594  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:32.873785  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:32.922615  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:32.924405  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 09:01:33.218120  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:33.374475  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:33.423564  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:33.423840  560524 kapi.go:107] duration metric: took 17.503090524s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 09:01:33.718190  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:33.874004  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:33.923121  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:34.218238  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:34.374948  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:34.423160  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:34.718033  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:34.892803  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:34.922649  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:35.218385  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:35.373474  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:35.423582  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:35.717603  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:35.873680  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:35.922943  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:36.218306  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:36.373850  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:36.422772  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:36.718229  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:36.873306  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:36.923493  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:37.217894  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:37.374306  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:37.423630  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:37.718152  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:37.891722  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:37.922114  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:38.217785  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:38.373452  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:38.451417  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:38.717515  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:38.873851  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:38.922896  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:39.264105  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:39.374593  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:39.423074  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:39.734042  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:39.874139  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:39.923126  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:40.217751  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:40.373925  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:40.423235  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:40.718315  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:40.884158  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:40.923194  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:41.218704  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:41.376208  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:41.424075  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:41.718414  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:41.873424  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:41.923874  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:42.218110  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:42.374173  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:42.423034  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:42.718620  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:42.873966  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:42.922939  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:43.218392  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:43.373738  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:43.423233  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:43.717372  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:43.873286  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:43.923708  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:44.217500  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:44.374023  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:44.422962  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:44.718430  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:44.874076  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:44.923115  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:45.259763  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:45.374049  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:45.423206  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:45.718080  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:45.876131  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:45.975142  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:46.218326  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:46.373340  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:46.423016  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:46.718549  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:46.873625  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:46.922667  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:47.218097  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:47.373618  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:47.474875  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:47.717687  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:47.874217  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:47.923578  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:48.217765  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:48.432005  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:48.432332  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:48.719867  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:48.936736  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:48.936829  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:49.246385  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:49.373707  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:49.422957  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:49.718435  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:49.874292  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:49.975172  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:50.218244  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:50.373729  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:50.422704  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:50.718147  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:50.873930  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:50.922984  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:51.218045  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:51.374625  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:51.422702  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:51.718047  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:51.874354  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:51.923423  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:52.235203  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:52.373950  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:52.459151  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:52.718770  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:52.873740  560524 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 09:01:52.923207  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:53.218709  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:53.374471  560524 kapi.go:107] duration metric: took 38.504297557s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 09:01:53.423630  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:53.717673  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:53.922844  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:54.217858  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:54.423200  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:54.718258  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:54.923580  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:55.217751  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:55.423617  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:55.717654  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 09:01:55.923389  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:56.218089  560524 kapi.go:107] duration metric: took 35.503708394s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 09:01:56.219717  560524 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-397143 cluster.
	I1206 09:01:56.221089  560524 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 09:01:56.222339  560524 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
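
The three gcp-auth messages above describe the addon's mutating webhook: it injects the mounted credentials into every pod created after it comes up, unless the pod opts out via the gcp-auth-skip-secret label. A minimal sketch of such an opt-out pod follows; the "true" value is an assumption from common usage (the log names only the label key), and a reachable kubeconfig is assumed:

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name: "no-gcp-creds",
                // The label the webhook checks for; "true" is an assumed
                // value, the log above only mentions the key.
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "main", Image: "docker.io/busybox:stable"}},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
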
	I1206 09:01:56.424290  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:56.923312  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:57.429386  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:57.923233  560524 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 09:01:58.423286  560524 kapi.go:107] duration metric: took 42.503917285s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
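
All of the kapi.go:96 lines above come from one pattern: list the pods matching a label selector on a fixed interval and report until every one of them is Running, at which point kapi.go:107 logs the total duration. A compilable sketch of the same loop using client-go's wait helper, with the csi-hostpath-driver selector from the log; the 500ms interval and the tolerance for transient list errors are assumptions, not minikube's exact settings:

    package kapisketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForLabel blocks until every pod matching selector in ns reports
    // phase Running, polling every 500ms up to timeout.
    func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, nil // tolerate transient API errors, keep polling
                }
                if len(pods.Items) == 0 {
                    return false, nil
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil
                    }
                }
                return true, nil
            })
    }

    // e.g. waitForLabel(ctx, cs, "kube-system",
    //     "kubernetes.io/minikube-addons=csi-hostpath-driver", 5*time.Minute)
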
	I1206 09:01:58.424968  560524 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, metrics-server, cloud-spanner, storage-provisioner, inspektor-gadget, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1206 09:01:58.426064  560524 addons.go:530] duration metric: took 45.948293408s for enable addons: enabled=[registry-creds nvidia-device-plugin amd-gpu-device-plugin ingress-dns metrics-server cloud-spanner storage-provisioner inspektor-gadget volcano yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1206 09:01:58.426109  560524 start.go:247] waiting for cluster config update ...
	I1206 09:01:58.426129  560524 start.go:256] writing updated cluster config ...
	I1206 09:01:58.426393  560524 ssh_runner.go:195] Run: rm -f paused
	I1206 09:01:58.430366  560524 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 09:01:58.433776  560524 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fpvgk" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:58.437927  560524 pod_ready.go:94] pod "coredns-66bc5c9577-fpvgk" is "Ready"
	I1206 09:01:58.437946  560524 pod_ready.go:86] duration metric: took 4.149963ms for pod "coredns-66bc5c9577-fpvgk" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:58.439684  560524 pod_ready.go:83] waiting for pod "etcd-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:58.443125  560524 pod_ready.go:94] pod "etcd-addons-397143" is "Ready"
	I1206 09:01:58.443142  560524 pod_ready.go:86] duration metric: took 3.44013ms for pod "etcd-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:58.444851  560524 pod_ready.go:83] waiting for pod "kube-apiserver-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:58.448179  560524 pod_ready.go:94] pod "kube-apiserver-addons-397143" is "Ready"
	I1206 09:01:58.448197  560524 pod_ready.go:86] duration metric: took 3.321895ms for pod "kube-apiserver-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:58.450048  560524 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:58.834021  560524 pod_ready.go:94] pod "kube-controller-manager-addons-397143" is "Ready"
	I1206 09:01:58.834055  560524 pod_ready.go:86] duration metric: took 383.980324ms for pod "kube-controller-manager-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:59.033884  560524 pod_ready.go:83] waiting for pod "kube-proxy-6fcf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:59.434807  560524 pod_ready.go:94] pod "kube-proxy-6fcf7" is "Ready"
	I1206 09:01:59.434840  560524 pod_ready.go:86] duration metric: took 400.931468ms for pod "kube-proxy-6fcf7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:01:59.634261  560524 pod_ready.go:83] waiting for pod "kube-scheduler-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:02:00.034546  560524 pod_ready.go:94] pod "kube-scheduler-addons-397143" is "Ready"
	I1206 09:02:00.034578  560524 pod_ready.go:86] duration metric: took 400.292578ms for pod "kube-scheduler-addons-397143" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 09:02:00.034593  560524 pod_ready.go:40] duration metric: took 1.604194688s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
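
The pod_ready.go:94 lines above reduce to a single predicate: a pod counts as "Ready" when its PodReady condition reports ConditionTrue. A short sketch of that check (the function name is illustrative, not minikube's):

    package readysketch

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
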
	I1206 09:02:00.080362  560524 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1206 09:02:00.083042  560524 out.go:179] * Done! kubectl is now configured to use "addons-397143" cluster and "default" namespace by default
	
	
	==> Docker <==
	Dec 06 09:03:54 addons-397143 dockerd[1058]: time="2025-12-06T09:03:54.122965272Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:03:57 addons-397143 dockerd[1058]: time="2025-12-06T09:03:57.039631482Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:03:57 addons-397143 dockerd[1058]: time="2025-12-06T09:03:57.072522331Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:04:42 addons-397143 dockerd[1058]: time="2025-12-06T09:04:42.125619351Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:04:46 addons-397143 dockerd[1058]: time="2025-12-06T09:04:46.037801315Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:04:46 addons-397143 dockerd[1058]: time="2025-12-06T09:04:46.067571484Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:05:16 addons-397143 dockerd[1058]: time="2025-12-06T09:05:16.786147314Z" level=info msg="ignoring event" container=810130a2493ee67cdfd9bb2208256c06a022d82e0a5ca930fec2771ac08604a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 09:05:32 addons-397143 dockerd[1058]: time="2025-12-06T09:05:32.082224586Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=071e8ddda600 ep=k8s_POD_helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7_local-path-storage_b7c7ce96-4280-453f-9d10-2c364f13cce8_0 net=none nid=529a01a6e280
	Dec 06 09:05:32 addons-397143 cri-dockerd[1348]: time="2025-12-06T09:05:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/530f9e22a8e0b07eac58ae3ac1b2927b0066568afff9e1b8fc98ad222981c521/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 06 09:05:32 addons-397143 dockerd[1058]: time="2025-12-06T09:05:32.166796282Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:05:32 addons-397143 dockerd[1058]: time="2025-12-06T09:05:32.198949472Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:05:47 addons-397143 dockerd[1058]: time="2025-12-06T09:05:47.041562993Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:05:47 addons-397143 dockerd[1058]: time="2025-12-06T09:05:47.130242013Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:05:47 addons-397143 cri-dockerd[1348]: time="2025-12-06T09:05:47Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
	Dec 06 09:06:03 addons-397143 dockerd[1058]: time="2025-12-06T09:06:03.112886725Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:06:10 addons-397143 dockerd[1058]: time="2025-12-06T09:06:10.036441321Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:06:10 addons-397143 dockerd[1058]: time="2025-12-06T09:06:10.064268895Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:07:01 addons-397143 dockerd[1058]: time="2025-12-06T09:07:01.054825653Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:07:01 addons-397143 dockerd[1058]: time="2025-12-06T09:07:01.087514170Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:07:32 addons-397143 dockerd[1058]: time="2025-12-06T09:07:32.218592509Z" level=info msg="ignoring event" container=530f9e22a8e0b07eac58ae3ac1b2927b0066568afff9e1b8fc98ad222981c521 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 09:08:02 addons-397143 dockerd[1058]: time="2025-12-06T09:08:02.508627708Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=79c6ca7d884b ep=k8s_POD_helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7_local-path-storage_8db43445-3765-4849-9a81-d1ec4f306fa2_0 net=none nid=529a01a6e280
	Dec 06 09:08:02 addons-397143 cri-dockerd[1348]: time="2025-12-06T09:08:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/41e2be6e26253b7da9000ac1472286f9b5178bf6ce3789c483fc983f21467f6d/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 06 09:08:02 addons-397143 dockerd[1058]: time="2025-12-06T09:08:02.592869668Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:08:02 addons-397143 dockerd[1058]: time="2025-12-06T09:08:02.681221861Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:08:02 addons-397143 cri-dockerd[1348]: time="2025-12-06T09:08:02Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Pulling from library/busybox"
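
The repeated "toomanyrequests" entries above are Docker Hub's unauthenticated pull rate limit, and they are the proximate cause of both the nginx ImagePullBackOff reported by the test and the failing busybox helper pulls. A minimal workaround sketch, assuming a Docker Hub account is available (DOCKER_USER/DOCKER_PASS are placeholders, not values from this run):

    # Create a registry pull secret and attach it to the default service account
    kubectl --context addons-397143 create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username="$DOCKER_USER" \
      --docker-password="$DOCKER_PASS"
    kubectl --context addons-397143 patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'

Authenticated pulls get a higher rate limit; alternatively, a registry mirror can be passed to minikube start via --registry-mirror.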
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	cf6f4d7de1c33       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          5 minutes ago       Running             busybox                   0                   5e3979b5feb75       busybox                                    default
	b59acd257d9ee       registry.k8s.io/ingress-nginx/controller@sha256:e4127065d0317bd11dc64c4dd38dcf7fb1c3d72e468110b4086e636dbaac943d             6 minutes ago       Running             controller                0                   80cc5f6e2300f       ingress-nginx-controller-6c8bf45fb-qd5t5   ingress-nginx
	117e4b540e15b       884bd0ac01c8f                                                                                                                6 minutes ago       Exited              patch                     1                   e1f813548887b       ingress-nginx-admission-patch-lnnt6        ingress-nginx
	eda6476e37832       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   6 minutes ago       Exited              create                    0                   07f39fab88723       ingress-nginx-admission-create-g7njc       ingress-nginx
	2aced5b71ddb4       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       6 minutes ago       Running             local-path-provisioner    0                   e9bbd8b257041       local-path-provisioner-648f6765c9-6969m    local-path-storage
	f7af93ac9f80e       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                         6 minutes ago       Running             minikube-ingress-dns      0                   e7e63b5ad38d4       kube-ingress-dns-minikube                  kube-system
	0f347014fdc71       6e38f40d628db                                                                                                                7 minutes ago       Running             storage-provisioner       0                   b2b59bab762c4       storage-provisioner                        kube-system
	066c6444849e6       52546a367cc9e                                                                                                                7 minutes ago       Running             coredns                   0                   be640513e14ca       coredns-66bc5c9577-fpvgk                   kube-system
	f4c3a7ec7956b       8aa150647e88a                                                                                                                7 minutes ago       Running             kube-proxy                0                   7970b8f8cb298       kube-proxy-6fcf7                           kube-system
	c050fa440776a       a5f569d49a979                                                                                                                7 minutes ago       Running             kube-apiserver            0                   d6f0c21d418ca       kube-apiserver-addons-397143               kube-system
	f1e917c087344       a3e246e9556e9                                                                                                                7 minutes ago       Running             etcd                      0                   f3ea735827e52       etcd-addons-397143                         kube-system
	c79c3ddffef14       01e8bacf0f500                                                                                                                7 minutes ago       Running             kube-controller-manager   0                   974acc705e827       kube-controller-manager-addons-397143      kube-system
	addeaa3548cdc       88320b5498ff2                                                                                                                7 minutes ago       Running             kube-scheduler            0                   ff08c61d5f918       kube-scheduler-addons-397143               kube-system
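
The nginx pod from the failing test is absent from this list: its container never started, so only the pause sandbox exists on the node. To inspect sandboxes and exited containers directly, one option via the CRI CLI (assuming crictl is present in the minikube node image, as it normally is):

    minikube -p addons-397143 ssh -- sudo crictl ps -a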
	
	
	==> controller_ingress [b59acd257d9e] <==
	I1206 09:01:53.750358       7 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I1206 09:01:53.750856       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1206 09:01:53.756904       7 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1206 09:01:53.757006       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-6c8bf45fb-qd5t5"
	I1206 09:01:53.789789       7 controller.go:228] "Backend successfully reloaded"
	I1206 09:01:53.789876       7 controller.go:240] "Initial sync, sleeping for 1 second"
	I1206 09:01:53.790003       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-6c8bf45fb-qd5t5", UID:"f88ce9b4-7922-4a77-8d6f-79b4c4429a31", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I1206 09:01:53.819283       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-qd5t5" node="addons-397143"
	I1206 09:01:53.827712       7 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-qd5t5" node="addons-397143"
	W1206 09:03:14.513965       7 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I1206 09:03:14.515196       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I1206 09:03:14.518595       7 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I1206 09:03:14.518816       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"051367f5-3640-4236-a394-0b65adee15af", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1815", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W1206 09:03:16.691150       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1206 09:03:16.692468       7 controller.go:214] "Configuration changes detected, backend reload required"
	I1206 09:03:16.744073       7 controller.go:228] "Backend successfully reloaded"
	I1206 09:03:16.744329       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-6c8bf45fb-qd5t5", UID:"f88ce9b4-7922-4a77-8d6f-79b4c4429a31", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1206 09:03:20.024051       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W1206 09:03:28.019899       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W1206 09:03:38.414023       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W1206 09:03:41.750037       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W1206 09:03:45.081029       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1206 09:03:53.765840       7 status.go:311] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	I1206 09:03:53.769629       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"051367f5-3640-4236-a394-0b65adee15af", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2172", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W1206 09:03:53.769688       7 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
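
The "does not have any active Endpoint" warnings are a downstream symptom, not an ingress bug: the Service default/nginx selects the nginx pod, which never became Ready, so no endpoints were ever published. A quick confirmation pair:

    kubectl --context addons-397143 get endpoints nginx -n default
    kubectl --context addons-397143 get pods -n default -l run=nginx -o wide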
	
	
	==> coredns [066c6444849e] <==
	[INFO] 10.244.0.8:52658 - 42477 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000166099s
	[INFO] 10.244.0.8:47411 - 48208 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000084729s
	[INFO] 10.244.0.8:47411 - 47850 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000108381s
	[INFO] 10.244.0.8:32988 - 11187 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000070128s
	[INFO] 10.244.0.8:32988 - 11414 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000091993s
	[INFO] 10.244.0.8:45533 - 40886 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000072829s
	[INFO] 10.244.0.8:45533 - 40614 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000102097s
	[INFO] 10.244.0.8:52408 - 60382 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000132537s
	[INFO] 10.244.0.8:52408 - 60102 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000165557s
	[INFO] 10.244.0.27:48995 - 17822 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000343105s
	[INFO] 10.244.0.27:58851 - 10520 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000460137s
	[INFO] 10.244.0.27:48846 - 61693 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136292s
	[INFO] 10.244.0.27:48229 - 1407 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159639s
	[INFO] 10.244.0.27:60678 - 33804 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137009s
	[INFO] 10.244.0.27:44355 - 57346 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00015559s
	[INFO] 10.244.0.27:34301 - 61571 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004710227s
	[INFO] 10.244.0.27:36034 - 2819 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007062006s
	[INFO] 10.244.0.27:42498 - 37577 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005544323s
	[INFO] 10.244.0.27:45026 - 21893 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006596011s
	[INFO] 10.244.0.27:49738 - 57538 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003748503s
	[INFO] 10.244.0.27:52981 - 58245 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005803702s
	[INFO] 10.244.0.27:42342 - 3936 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001832996s
	[INFO] 10.244.0.27:56619 - 208 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.002127039s
	[INFO] 10.244.0.32:41418 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000507144s
	[INFO] 10.244.0.32:34882 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000191563s
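
The NXDOMAIN bursts above are the standard ndots:5 search-path walk: each external lookup (e.g. storage.googleapis.com) is tried against every suffix from the pod's resolv.conf before the bare name finally returns NOERROR. Where that overhead matters, a pod can opt out; a hypothetical spec fragment:

    # Hypothetical pod spec fragment: resolve external names on the first query
    dnsConfig:
      options:
        - name: ndots
          value: "1"

Using a fully qualified name with a trailing dot (storage.googleapis.com.) has the same effect per-lookup.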
	
	
	==> describe nodes <==
	Name:               addons-397143
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-397143
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=addons-397143
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_01_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-397143
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:01:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-397143
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:08:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:03:40 +0000   Sat, 06 Dec 2025 09:01:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:03:40 +0000   Sat, 06 Dec 2025 09:01:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:03:40 +0000   Sat, 06 Dec 2025 09:01:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:03:40 +0000   Sat, 06 Dec 2025 09:01:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-397143
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                0d1db992-e563-4087-8925-e25804a95f3c
	  Boot ID:                    41ef56f7-de94-4c23-8e93-ec48e4e68466
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.1.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-qd5t5                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         7m3s
	  kube-system                 coredns-66bc5c9577-fpvgk                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m5s
	  kube-system                 etcd-addons-397143                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m10s
	  kube-system                 kube-apiserver-addons-397143                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-controller-manager-addons-397143                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m3s
	  kube-system                 kube-proxy-6fcf7                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-scheduler-addons-397143                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  local-path-storage          helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  local-path-storage          local-path-provisioner-648f6765c9-6969m                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             260Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m3s   kube-proxy       
	  Normal  Starting                 7m11s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m10s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m10s  kubelet          Node addons-397143 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m10s  kubelet          Node addons-397143 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m10s  kubelet          Node addons-397143 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m6s   node-controller  Node addons-397143 event: Registered Node addons-397143 in Controller
	  Normal  NodeReady                7m6s   kubelet          Node addons-397143 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 46 ee ab 83 85 8a 08 06
	[  +0.768089] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 62 35 57 c8 5d fa 08 06
	[  +3.986685] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8a 13 a9 14 7f 58 08 06
	[  +0.848154] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 8a 6f 1a 22 ad 40 08 06
	[  +0.251239] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0e a7 c4 94 ef 5a 08 06
	[  +0.431184] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 41 2d 73 86 ed 08 06
	[  +0.515220] IPv4: martian source 10.244.0.8 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[  +0.026943] IPv4: martian source 10.244.0.8 from 10.244.0.6, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[  +1.299675] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 16 c7 72 c4 93 08 06
	[  +0.000525] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[Dec 6 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 3f cc d3 d9 16 08 06
	[  +0.000633] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[  +0.000768] IPv4: martian source 10.244.0.32 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	
	
	==> etcd [f1e917c08734] <==
	{"level":"warn","ts":"2025-12-06T09:01:04.163424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:04.169668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:04.184963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:04.191774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:04.197980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:04.242893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:16.411510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:16.420778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52146","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:01:18.124593Z","caller":"traceutil/trace.go:172","msg":"trace[1481053207] transaction","detail":"{read_only:false; response_revision:922; number_of_response:1; }","duration":"138.501901ms","start":"2025-12-06T09:01:17.986070Z","end":"2025-12-06T09:01:18.124572Z","steps":["trace[1481053207] 'process raft request'  (duration: 138.358285ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:01:21.417420Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.244323ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041795505965803 > lease_revoke:<id:70cc9af2e4a944f8>","response":"size:29"}
	{"level":"info","ts":"2025-12-06T09:01:21.574870Z","caller":"traceutil/trace.go:172","msg":"trace[692152230] transaction","detail":"{read_only:false; response_revision:990; number_of_response:1; }","duration":"124.890399ms","start":"2025-12-06T09:01:21.449960Z","end":"2025-12-06T09:01:21.574851Z","steps":["trace[692152230] 'process raft request'  (duration: 124.720317ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:01:38.157844Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.27016ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-06T09:01:38.157937Z","caller":"traceutil/trace.go:172","msg":"trace[380136476] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1074; }","duration":"161.350974ms","start":"2025-12-06T09:01:37.996547Z","end":"2025-12-06T09:01:38.157898Z","steps":["trace[380136476] 'range keys from in-memory index tree'  (duration: 161.20193ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-06T09:01:41.694404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.728629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.760808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.787442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.796496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.823391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.835451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.848253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.862528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.871119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.883399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:01:41.897318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37114","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:08:17 up  1:50,  0 user,  load average: 0.13, 0.95, 1.80
	Linux addons-397143 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [c050fa440776] <==
	W1206 09:02:29.377460       1 cacher.go:182] Terminating all watchers from cacher jobs.batch.volcano.sh
	W1206 09:02:29.385766       1 cacher.go:182] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W1206 09:02:29.407678       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W1206 09:02:29.725514       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1206 09:02:29.830558       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1206 09:02:49.742611       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51456: use of closed network connection
	E1206 09:02:49.952113       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:51484: use of closed network connection
	I1206 09:02:59.512313       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.183.137"}
	I1206 09:03:14.516093       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1206 09:03:14.702841       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.22.8"}
	I1206 09:03:15.657550       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1206 09:03:37.626555       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:03:37.626610       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:03:37.641121       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:03:37.641162       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:03:37.645766       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:03:37.645820       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:03:37.659460       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:03:37.659509       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1206 09:03:37.676854       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1206 09:03:37.676948       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1206 09:03:38.642263       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1206 09:03:38.677627       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1206 09:03:38.696416       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1206 09:03:41.208446       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [c79c3ddffef1] <==
	E1206 09:07:30.556593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:07:33.046191       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:07:33.047190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:07:42.035936       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:07:42.036875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:07:45.778001       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:07:45.778901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:07:46.570609       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:07:46.571590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:07:47.937050       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:07:47.938010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:07:52.765353       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:07:52.766361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:07:52.816302       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:07:52.817206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:08:02.430681       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:08:02.431659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:08:07.084435       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:08:07.085355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:08:10.708661       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:08:10.709681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:08:15.683414       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:08:15.684412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1206 09:08:16.714340       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1206 09:08:16.715448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
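
These repeating "Failed to watch *v1.PartialObjectMetadata" errors line up with the apiserver log above, where the volcano and snapshot.storage.k8s.io CRDs were torn down mid-run ("Terminating all watchers"): the metadata informers keep re-listing resource types that are no longer served. One way to check what actually remains:

    kubectl --context addons-397143 api-resources | grep -E 'volcano|snapshot' || echo "none served"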
	
	
	==> kube-proxy [f4c3a7ec7956] <==
	I1206 09:01:13.298363       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:01:13.419167       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:01:13.524968       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:01:13.525037       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:01:13.525165       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:01:13.619652       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:01:13.619726       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:01:13.642311       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:01:13.653409       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:01:13.653945       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:01:13.668205       1 config.go:200] "Starting service config controller"
	I1206 09:01:13.672419       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:01:13.668460       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:01:13.673249       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:01:13.668576       1 config.go:309] "Starting node config controller"
	I1206 09:01:13.673273       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:01:13.673280       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:01:13.673577       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:01:13.673591       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:01:13.774062       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:01:13.774132       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:01:13.774188       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [addeaa3548cd] <==
	E1206 09:01:04.640884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:01:04.640891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 09:01:04.640940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:01:04.640992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:01:04.640997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 09:01:04.641008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:01:04.641067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:01:05.448301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 09:01:05.471655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 09:01:05.476748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 09:01:05.519278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1206 09:01:05.519427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 09:01:05.520063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 09:01:05.544570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 09:01:05.563744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 09:01:05.582100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 09:01:05.604260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 09:01:05.622415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 09:01:05.643993       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 09:01:05.691142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 09:01:05.740532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 09:01:05.750548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 09:01:05.752414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 09:01:05.767387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1206 09:01:08.537965       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:07:20 addons-397143 kubelet[2219]: E1206 09:07:20.019796    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	Dec 06 09:07:30 addons-397143 kubelet[2219]: E1206 09:07:30.020275    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7" podUID="b7c7ce96-4280-453f-9d10-2c364f13cce8"
	Dec 06 09:07:31 addons-397143 kubelet[2219]: I1206 09:07:31.018104    2219 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 09:07:32 addons-397143 kubelet[2219]: I1206 09:07:32.368970    2219 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b7c7ce96-4280-453f-9d10-2c364f13cce8-script\") pod \"b7c7ce96-4280-453f-9d10-2c364f13cce8\" (UID: \"b7c7ce96-4280-453f-9d10-2c364f13cce8\") "
	Dec 06 09:07:32 addons-397143 kubelet[2219]: I1206 09:07:32.369025    2219 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b7c7ce96-4280-453f-9d10-2c364f13cce8-data\") pod \"b7c7ce96-4280-453f-9d10-2c364f13cce8\" (UID: \"b7c7ce96-4280-453f-9d10-2c364f13cce8\") "
	Dec 06 09:07:32 addons-397143 kubelet[2219]: I1206 09:07:32.369050    2219 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd7wk\" (UniqueName: \"kubernetes.io/projected/b7c7ce96-4280-453f-9d10-2c364f13cce8-kube-api-access-sd7wk\") pod \"b7c7ce96-4280-453f-9d10-2c364f13cce8\" (UID: \"b7c7ce96-4280-453f-9d10-2c364f13cce8\") "
	Dec 06 09:07:32 addons-397143 kubelet[2219]: I1206 09:07:32.369142    2219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7c7ce96-4280-453f-9d10-2c364f13cce8-data" (OuterVolumeSpecName: "data") pod "b7c7ce96-4280-453f-9d10-2c364f13cce8" (UID: "b7c7ce96-4280-453f-9d10-2c364f13cce8"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 06 09:07:32 addons-397143 kubelet[2219]: I1206 09:07:32.369428    2219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c7ce96-4280-453f-9d10-2c364f13cce8-script" (OuterVolumeSpecName: "script") pod "b7c7ce96-4280-453f-9d10-2c364f13cce8" (UID: "b7c7ce96-4280-453f-9d10-2c364f13cce8"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Dec 06 09:07:32 addons-397143 kubelet[2219]: I1206 09:07:32.371158    2219 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c7ce96-4280-453f-9d10-2c364f13cce8-kube-api-access-sd7wk" (OuterVolumeSpecName: "kube-api-access-sd7wk") pod "b7c7ce96-4280-453f-9d10-2c364f13cce8" (UID: "b7c7ce96-4280-453f-9d10-2c364f13cce8"). InnerVolumeSpecName "kube-api-access-sd7wk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 06 09:07:32 addons-397143 kubelet[2219]: I1206 09:07:32.469706    2219 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sd7wk\" (UniqueName: \"kubernetes.io/projected/b7c7ce96-4280-453f-9d10-2c364f13cce8-kube-api-access-sd7wk\") on node \"addons-397143\" DevicePath \"\""
	Dec 06 09:07:32 addons-397143 kubelet[2219]: I1206 09:07:32.469742    2219 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/b7c7ce96-4280-453f-9d10-2c364f13cce8-script\") on node \"addons-397143\" DevicePath \"\""
	Dec 06 09:07:32 addons-397143 kubelet[2219]: I1206 09:07:32.469751    2219 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/b7c7ce96-4280-453f-9d10-2c364f13cce8-data\") on node \"addons-397143\" DevicePath \"\""
	Dec 06 09:07:33 addons-397143 kubelet[2219]: I1206 09:07:33.026021    2219 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c7ce96-4280-453f-9d10-2c364f13cce8" path="/var/lib/kubelet/pods/b7c7ce96-4280-453f-9d10-2c364f13cce8/volumes"
	Dec 06 09:07:34 addons-397143 kubelet[2219]: E1206 09:07:34.019389    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	Dec 06 09:07:49 addons-397143 kubelet[2219]: E1206 09:07:49.019730    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	Dec 06 09:08:02 addons-397143 kubelet[2219]: I1206 09:08:02.254813    2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/8db43445-3765-4849-9a81-d1ec4f306fa2-script\") pod \"helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7\" (UID: \"8db43445-3765-4849-9a81-d1ec4f306fa2\") " pod="local-path-storage/helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7"
	Dec 06 09:08:02 addons-397143 kubelet[2219]: I1206 09:08:02.254857    2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6x4n\" (UniqueName: \"kubernetes.io/projected/8db43445-3765-4849-9a81-d1ec4f306fa2-kube-api-access-x6x4n\") pod \"helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7\" (UID: \"8db43445-3765-4849-9a81-d1ec4f306fa2\") " pod="local-path-storage/helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7"
	Dec 06 09:08:02 addons-397143 kubelet[2219]: I1206 09:08:02.254879    2219 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/8db43445-3765-4849-9a81-d1ec4f306fa2-data\") pod \"helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7\" (UID: \"8db43445-3765-4849-9a81-d1ec4f306fa2\") " pod="local-path-storage/helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7"
	Dec 06 09:08:02 addons-397143 kubelet[2219]: E1206 09:08:02.683151    2219 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:08:02 addons-397143 kubelet[2219]: E1206 09:08:02.683213    2219 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 06 09:08:02 addons-397143 kubelet[2219]: E1206 09:08:02.683313    2219 kuberuntime_manager.go:1449] "Unhandled Error" err="container helper-pod start failed in pod helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7_local-path-storage(8db43445-3765-4849-9a81-d1ec4f306fa2): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:08:02 addons-397143 kubelet[2219]: E1206 09:08:02.683361    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7" podUID="8db43445-3765-4849-9a81-d1ec4f306fa2"
	Dec 06 09:08:02 addons-397143 kubelet[2219]: E1206 09:08:02.966979    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7" podUID="8db43445-3765-4849-9a81-d1ec4f306fa2"
	Dec 06 09:08:03 addons-397143 kubelet[2219]: E1206 09:08:03.020319    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	Dec 06 09:08:15 addons-397143 kubelet[2219]: E1206 09:08:15.020441    2219 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="cdfe45e6-04be-4041-b2cb-1d4867877943"
	
	
	==> storage-provisioner [0f347014fdc7] <==
	W1206 09:07:52.647944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:07:54.650457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:07:54.654813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:07:56.657386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:07:56.660351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:07:58.663879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:07:58.667363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:00.670065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:00.674394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:02.677183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:02.680569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:04.683371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:04.687107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:06.690474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:06.694054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:08.697629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:08.701271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:10.704279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:10.708146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:12.710860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:12.715473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:14.718659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:14.722419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:16.725441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:08:16.728903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
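Note (triage): every failed pull in the logs above, docker.io/nginx:alpine in default and docker.io/busybox:stable in local-path-storage alike, has the same root cause: Docker Hub's unauthenticated pull rate limit ("toomanyrequests") on this CI host. The v1 Endpoints deprecation warnings from storage-provisioner are unrelated noise. A minimal mitigation sketch, assuming a Docker Hub account is available (<user> and <token> are placeholders, not values from this run), would be to make the test profile pull with credentials:

	# create registry credentials in the cluster and attach them to the default service account
	kubectl --context addons-397143 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	kubectl --context addons-397143 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'

Alternatively, the profile could be started against a mirror (e.g. minikube start --registry-mirror=https://mirror.gcr.io) so anonymous pulls never hit docker.io.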
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-397143 -n addons-397143
helpers_test.go:269: (dbg) Run:  kubectl --context addons-397143 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx test-local-path ingress-nginx-admission-create-g7njc ingress-nginx-admission-patch-lnnt6 helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-397143 describe pod nginx test-local-path ingress-nginx-admission-create-g7njc ingress-nginx-admission-patch-lnnt6 helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-397143 describe pod nginx test-local-path ingress-nginx-admission-create-g7njc ingress-nginx-admission-patch-lnnt6 helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7: exit status 1 (79.311485ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-397143/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:03:14 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.33
	IPs:
	  IP:  10.244.0.33
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vnp5t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vnp5t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m4s                  default-scheduler  Successfully assigned default/nginx to addons-397143
	  Normal   Pulling    2m15s (x5 over 5m3s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m15s (x5 over 5m3s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m15s (x5 over 5m3s)  kubelet            Error: ErrImagePull
	  Warning  Failed     69s (x15 over 5m3s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3s (x20 over 5m3s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6479g (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-6479g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-g7njc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lnnt6" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-397143 describe pod nginx test-local-path ingress-nginx-admission-create-g7njc ingress-nginx-admission-patch-lnnt6 helper-pod-create-pvc-53701ec8-e0db-4248-91b3-29676b2e82d7: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-397143 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.66628548s)
--- FAIL: TestAddons/parallel/LocalPath (344.77s)
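Note (triage): TestAddons/parallel/LocalPath fails for the same reason as Ingress; the helper pod's busybox:stable pull was rate-limited (see the 09:07:30 and 09:08:02 kubelet entries above). A quick cluster-side confirmation sketch, listing the failed-pull events in the namespace:

	# list Failed events (image pull failures) in chronological order
	kubectl --context addons-397143 get events -n local-path-storage \
	  --field-selector reason=Failed --sort-by=.lastTimestamp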

TestFunctional/parallel/DashboardCmd (301.97s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-059985 --alsologtostderr -v=1]
E1206 09:17:00.097247  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:17:27.799065  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
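Note (triage): the two cert_rotation errors above point at the client cert of the already-deleted addons-397143 profile and are benign for this test; the missing path can be confirmed directly (sketch, path taken verbatim from the error):

	# the addons-397143 profile directory should be absent after that suite's teardown
	ls /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/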
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-059985 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-059985 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-059985 --alsologtostderr -v=1] stderr:
I1206 09:15:23.165988  615336 out.go:360] Setting OutFile to fd 1 ...
I1206 09:15:23.166282  615336 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:15:23.166293  615336 out.go:374] Setting ErrFile to fd 2...
I1206 09:15:23.166300  615336 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:15:23.166531  615336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
I1206 09:15:23.166809  615336 mustload.go:66] Loading cluster: functional-059985
I1206 09:15:23.167234  615336 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1206 09:15:23.167662  615336 cli_runner.go:164] Run: docker container inspect functional-059985 --format={{.State.Status}}
I1206 09:15:23.186194  615336 host.go:66] Checking if "functional-059985" exists ...
I1206 09:15:23.186502  615336 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 09:15:23.245957  615336 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:15:23.234757033 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1206 09:15:23.246111  615336 api_server.go:166] Checking apiserver status ...
I1206 09:15:23.246161  615336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1206 09:15:23.246217  615336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-059985
I1206 09:15:23.263588  615336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/functional-059985/id_rsa Username:docker}
I1206 09:15:23.365713  615336 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/8934/cgroup
W1206 09:15:23.375074  615336 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/8934/cgroup: Process exited with status 1
stdout:

stderr:
I1206 09:15:23.375142  615336 ssh_runner.go:195] Run: ls
I1206 09:15:23.379077  615336 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1206 09:15:23.384331  615336 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1206 09:15:23.384390  615336 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1206 09:15:23.384553  615336 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1206 09:15:23.384572  615336 addons.go:70] Setting dashboard=true in profile "functional-059985"
I1206 09:15:23.384582  615336 addons.go:239] Setting addon dashboard=true in "functional-059985"
I1206 09:15:23.384606  615336 host.go:66] Checking if "functional-059985" exists ...
I1206 09:15:23.384962  615336 cli_runner.go:164] Run: docker container inspect functional-059985 --format={{.State.Status}}
I1206 09:15:23.405655  615336 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1206 09:15:23.406963  615336 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1206 09:15:23.407964  615336 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1206 09:15:23.407989  615336 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1206 09:15:23.408112  615336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-059985
I1206 09:15:23.426748  615336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/functional-059985/id_rsa Username:docker}
I1206 09:15:23.528007  615336 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1206 09:15:23.528032  615336 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1206 09:15:23.541081  615336 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1206 09:15:23.541106  615336 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1206 09:15:23.553976  615336 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1206 09:15:23.554006  615336 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1206 09:15:23.566830  615336 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1206 09:15:23.566864  615336 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1206 09:15:23.579530  615336 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1206 09:15:23.579554  615336 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1206 09:15:23.593268  615336 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1206 09:15:23.593289  615336 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1206 09:15:23.606221  615336 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1206 09:15:23.606249  615336 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1206 09:15:23.618897  615336 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1206 09:15:23.618941  615336 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1206 09:15:23.631457  615336 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1206 09:15:23.631484  615336 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1206 09:15:23.643870  615336 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1206 09:15:24.084587  615336 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-059985 addons enable metrics-server

I1206 09:15:24.085585  615336 addons.go:202] Writing out "functional-059985" config to set dashboard=true...
W1206 09:15:24.085812  615336 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1206 09:15:24.086438  615336 kapi.go:59] client config for functional-059985: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-059985/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-059985/client.key", CAFile:"/home/jenkins/minikube-integration/22047-555179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1206 09:15:24.086892  615336 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1206 09:15:24.086924  615336 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1206 09:15:24.086933  615336 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1206 09:15:24.086939  615336 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1206 09:15:24.086946  615336 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1206 09:15:24.094406  615336 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  ac3c0389-94d5-4f07-9062-1cad53b11c88 934 0 2025-12-06 09:15:24 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-06 09:15:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.97.141.231,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.97.141.231],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1206 09:15:24.094541  615336 out.go:285] * Launching proxy ...
* Launching proxy ...
I1206 09:15:24.094605  615336 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-059985 proxy --port 36195]
I1206 09:15:24.094904  615336 dashboard.go:159] Waiting for kubectl to output host:port ...
I1206 09:15:24.139957  615336 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1206 09:15:24.140053  615336 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1206 09:15:24.147904  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d5fe238b-7f1a-44a7-bbec-8f512ced31e1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc0007ce7c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002092c0 TLS:<nil>}
I1206 09:15:24.147991  615336 retry.go:31] will retry after 50.367µs: Temporary Error: unexpected response code: 503
I1206 09:15:24.151272  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[84cf2cae-a64a-426d-85cd-b239d3d9f86d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc0007ce880 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150140 TLS:<nil>}
I1206 09:15:24.151325  615336 retry.go:31] will retry after 154.768µs: Temporary Error: unexpected response code: 503
I1206 09:15:24.154354  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5be12420-debf-47a9-94fe-f89c7b92a637] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc0017762c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150280 TLS:<nil>}
I1206 09:15:24.154421  615336 retry.go:31] will retry after 293.881µs: Temporary Error: unexpected response code: 503
I1206 09:15:24.157411  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[942a5c53-8535-4332-afcd-933e70b0acd2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc00057ae00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209400 TLS:<nil>}
I1206 09:15:24.157465  615336 retry.go:31] will retry after 285.41µs: Temporary Error: unexpected response code: 503
I1206 09:15:24.160746  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4ba12bca-879f-413a-b809-72ef539d8c3c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc0017763c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bcf00 TLS:<nil>}
I1206 09:15:24.160786  615336 retry.go:31] will retry after 392.421µs: Temporary Error: unexpected response code: 503
I1206 09:15:24.163718  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0d60abc1-6a5f-4bd8-b417-6faf429b52d9] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc00057af00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209540 TLS:<nil>}
I1206 09:15:24.163758  615336 retry.go:31] will retry after 492.662µs: Temporary Error: unexpected response code: 503
I1206 09:15:24.166702  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b0bad224-46bd-4f58-85cb-4d41446e5aa2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc0017764c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bd180 TLS:<nil>}
I1206 09:15:24.166747  615336 retry.go:31] will retry after 1.42093ms: Temporary Error: unexpected response code: 503
I1206 09:15:24.171101  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ccce9cac-f777-4c3e-b30b-44caec48790d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc0007ce9c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209680 TLS:<nil>}
I1206 09:15:24.171154  615336 retry.go:31] will retry after 1.33613ms: Temporary Error: unexpected response code: 503
I1206 09:15:24.175421  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[297e602b-e9a0-4b11-b44d-c48a510fb999] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc00057afc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001503c0 TLS:<nil>}
I1206 09:15:24.175463  615336 retry.go:31] will retry after 3.266793ms: Temporary Error: unexpected response code: 503
I1206 09:15:24.181667  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[559ba2fe-912b-4cbe-8606-c8288ccadca4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc0007ceac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bd2c0 TLS:<nil>}
I1206 09:15:24.181714  615336 retry.go:31] will retry after 2.434531ms: Temporary Error: unexpected response code: 503
I1206 09:15:24.187170  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e6c03e31-6d0a-4932-93c9-89f552302feb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc0017765c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150500 TLS:<nil>}
I1206 09:15:24.187224  615336 retry.go:31] will retry after 4.872482ms: Temporary Error: unexpected response code: 503
I1206 09:15:24.194520  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4c3142d7-b2d6-4822-b532-bb4abafedb8a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc00057b100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002097c0 TLS:<nil>}
I1206 09:15:24.194561  615336 retry.go:31] will retry after 10.989022ms: Temporary Error: unexpected response code: 503
I1206 09:15:24.208494  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4f532876-2db8-4e8f-9b28-d658b55ec710] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc0007cebc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bd400 TLS:<nil>}
I1206 09:15:24.208557  615336 retry.go:31] will retry after 10.036485ms: Temporary Error: unexpected response code: 503
I1206 09:15:24.221603  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c74fa389-c7d5-4f78-b554-9ae84447c8fa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc00057b1c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001508c0 TLS:<nil>}
I1206 09:15:24.221688  615336 retry.go:31] will retry after 19.891345ms: Temporary Error: unexpected response code: 503
I1206 09:15:24.245465  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c1bba071-2390-4c3c-8742-c2825d013d6a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc0007cec80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bd680 TLS:<nil>}
I1206 09:15:24.245551  615336 retry.go:31] will retry after 37.683868ms: Temporary Error: unexpected response code: 503
I1206 09:15:24.286870  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d190e97f-9e08-4604-9d83-302630f056cd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc00057b280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000150b40 TLS:<nil>}
I1206 09:15:24.286968  615336 retry.go:31] will retry after 50.153807ms: Temporary Error: unexpected response code: 503
I1206 09:15:24.340779  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7596713-25f7-4f80-b354-e25894aa0b57] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc001776780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bd7c0 TLS:<nil>}
I1206 09:15:24.340861  615336 retry.go:31] will retry after 78.166136ms: Temporary Error: unexpected response code: 503
I1206 09:15:24.423209  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[92a9875c-b505-4187-ad3e-34d087f7d00b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc00057b380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209900 TLS:<nil>}
I1206 09:15:24.423303  615336 retry.go:31] will retry after 56.484244ms: Temporary Error: unexpected response code: 503
I1206 09:15:24.483751  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[267c612e-ffc1-4dbf-b148-95da50228274] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc001776840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bd900 TLS:<nil>}
I1206 09:15:24.483824  615336 retry.go:31] will retry after 91.94295ms: Temporary Error: unexpected response code: 503
I1206 09:15:24.579187  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a2d1548c-4f8e-48e0-966e-6712581d63b3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc0007ced80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000209a40 TLS:<nil>}
I1206 09:15:24.579255  615336 retry.go:31] will retry after 217.014647ms: Temporary Error: unexpected response code: 503
I1206 09:15:24.799889  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c88c4585-5d30-4be7-94f5-f80d7459cb05] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:24 GMT]] Body:0xc0007976c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001512c0 TLS:<nil>}
I1206 09:15:24.800042  615336 retry.go:31] will retry after 434.371138ms: Temporary Error: unexpected response code: 503
I1206 09:15:25.237579  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d95ad58f-4909-41fc-b80e-dbcb2416c20f] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:25 GMT]] Body:0xc00057b500 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d63c0 TLS:<nil>}
I1206 09:15:25.237645  615336 retry.go:31] will retry after 366.955745ms: Temporary Error: unexpected response code: 503
I1206 09:15:25.608518  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a513df91-e831-48cd-94b3-1df1c4949b06] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:25 GMT]] Body:0xc00057b5c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bda40 TLS:<nil>}
I1206 09:15:25.608611  615336 retry.go:31] will retry after 577.880685ms: Temporary Error: unexpected response code: 503
I1206 09:15:26.190691  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6e123c1a-fdf3-415b-8b40-09594c49c6eb] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:26 GMT]] Body:0xc0007ceec0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bdb80 TLS:<nil>}
I1206 09:15:26.190883  615336 retry.go:31] will retry after 903.24929ms: Temporary Error: unexpected response code: 503
I1206 09:15:27.097828  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2316ec88-839d-4ea8-98b1-13d8f1ca0088] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:27 GMT]] Body:0xc000797780 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000151400 TLS:<nil>}
I1206 09:15:27.097894  615336 retry.go:31] will retry after 2.452231382s: Temporary Error: unexpected response code: 503
I1206 09:15:29.554338  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[09c14f33-b119-4a1e-be92-efe62bf0fea0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:29 GMT]] Body:0xc00057b700 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d6500 TLS:<nil>}
I1206 09:15:29.554402  615336 retry.go:31] will retry after 3.44874176s: Temporary Error: unexpected response code: 503
I1206 09:15:33.009185  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b69d4fa9-7f9a-437e-be54-308a5aa2531a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:33 GMT]] Body:0xc00184c040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d6780 TLS:<nil>}
I1206 09:15:33.009258  615336 retry.go:31] will retry after 5.18939243s: Temporary Error: unexpected response code: 503
I1206 09:15:38.202651  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[89fd15ee-4dfc-440c-a235-6b06345e34a7] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:38 GMT]] Body:0xc0007cefc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bdcc0 TLS:<nil>}
I1206 09:15:38.202716  615336 retry.go:31] will retry after 5.269239515s: Temporary Error: unexpected response code: 503
I1206 09:15:43.474826  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[126fc2ee-a6fc-4cf0-b0b4-556671aa1bd2] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:43 GMT]] Body:0xc00184c100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000151680 TLS:<nil>}
I1206 09:15:43.474892  615336 retry.go:31] will retry after 10.39316134s: Temporary Error: unexpected response code: 503
I1206 09:15:53.871116  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ffc971af-2d4c-4adf-be3b-c5f04e6c3da0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:15:53 GMT]] Body:0xc00184c180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d68c0 TLS:<nil>}
I1206 09:15:53.871189  615336 retry.go:31] will retry after 8.658533757s: Temporary Error: unexpected response code: 503
I1206 09:16:02.536775  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[43d0a35d-5654-4d29-86f3-f37ca0dd099a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:16:02 GMT]] Body:0xc000797940 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bde00 TLS:<nil>}
I1206 09:16:02.536846  615336 retry.go:31] will retry after 22.922432508s: Temporary Error: unexpected response code: 503
I1206 09:16:25.463266  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[dcda5bb7-ea3e-486c-b161-44c744e76bdc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:16:25 GMT]] Body:0xc0007979c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001850000 TLS:<nil>}
I1206 09:16:25.463340  615336 retry.go:31] will retry after 32.373858573s: Temporary Error: unexpected response code: 503
I1206 09:16:57.843249  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fd5cff72-cd6b-4e0f-9592-6e0ac77b1826] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:16:57 GMT]] Body:0xc000797a40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001517c0 TLS:<nil>}
I1206 09:16:57.843327  615336 retry.go:31] will retry after 49.255871041s: Temporary Error: unexpected response code: 503
I1206 09:17:47.102667  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e63b6133-0305-42d4-8480-94eb2fe084b1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:17:47 GMT]] Body:0xc00184c040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00036a000 TLS:<nil>}
I1206 09:17:47.102764  615336 retry.go:31] will retry after 1m13.91159791s: Temporary Error: unexpected response code: 503
I1206 09:19:01.020146  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cd17cd72-85e3-482f-a8d5-448013f71e4c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:19:01 GMT]] Body:0xc00184c0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001850140 TLS:<nil>}
I1206 09:19:01.020217  615336 retry.go:31] will retry after 59.674138825s: Temporary Error: unexpected response code: 503
I1206 09:20:00.698607  615336 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5a1f6238-964d-4cad-86db-754fbf317c6b] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 06 Dec 2025 09:20:00 GMT]] Body:0xc000b88100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001850280 TLS:<nil>}
I1206 09:20:00.698696  615336 retry.go:31] will retry after 1m3.58343013s: Temporary Error: unexpected response code: 503
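The 503s above persist because the dashboard Service has no ready endpoints: the Docker engine logs in the post-mortem below show the kubernetesui/dashboard image pull being rejected by Docker Hub's rate limit. The growing "will retry after" intervals suggest a jittered, roughly exponential backoff. Below is a minimal, self-contained Go sketch of that polling pattern; the name pollDashboard and the backoff constants are illustrative assumptions, not minikube's actual retry helper.

    package main

    import (
    	"fmt"
    	"math/rand"
    	"net/http"
    	"time"
    )

    // pollDashboard polls url until it returns 200 OK or the overall timeout
    // expires, sleeping a jittered, growing interval between attempts, capped
    // at about a minute (the shape the "will retry after" lines above show).
    func pollDashboard(url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	wait := 5 * time.Second
    	for time.Now().Before(deadline) {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("unexpected response code: %d, will retry after %s\n",
    				resp.StatusCode, wait)
    		}
    		time.Sleep(wait)
    		// Grow the interval by a random factor in [1.5, 2.0); these are
    		// illustrative constants, not the ones minikube actually uses.
    		wait = time.Duration(float64(wait) * (1.5 + 0.5*rand.Float64()))
    		if limit := time.Minute; wait > limit {
    			wait = limit
    		}
    	}
    	return fmt.Errorf("service did not become ready within %s", timeout)
    }

    func main() {
    	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
    	if err := pollDashboard(url, 5*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }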
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-059985
helpers_test.go:243: (dbg) docker inspect functional-059985:
-- stdout --
	[
	    {
	        "Id": "28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1",
	        "Created": "2025-12-06T09:12:25.761854111Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 595684,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:12:25.79611275Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1/hostname",
	        "HostsPath": "/var/lib/docker/containers/28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1/hosts",
	        "LogPath": "/var/lib/docker/containers/28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1/28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1-json.log",
	        "Name": "/functional-059985",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-059985:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-059985",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1",
	                "LowerDir": "/var/lib/docker/overlay2/f58f811cd1ba143b7f154b90e3041b92e9aac39862da0e326320338057382175-init/diff:/var/lib/docker/overlay2/e436edcb7322c840f879b3c5d1d6403a3125a1711763277d84155a12f01e0462/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f58f811cd1ba143b7f154b90e3041b92e9aac39862da0e326320338057382175/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f58f811cd1ba143b7f154b90e3041b92e9aac39862da0e326320338057382175/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f58f811cd1ba143b7f154b90e3041b92e9aac39862da0e326320338057382175/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-059985",
	                "Source": "/var/lib/docker/volumes/functional-059985/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-059985",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-059985",
	                "name.minikube.sigs.k8s.io": "functional-059985",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "639f0d64dc493dc7d45e24f95d75b88019e64dcc87ad68bb60f67d1c1c02731c",
	            "SandboxKey": "/var/run/docker/netns/639f0d64dc49",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-059985": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "196fcf3b074f568c2e7586daa1ecef14111229ee5ba62bcd741749a967fdc6f6",
	                    "EndpointID": "416d03b943ae896e32254e021ca2971c63b8e6f933f20cf6d6a38c64029f3545",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "f2:d4:ba:1c:b0:0d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-059985",
	                        "28465384520d"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
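The inspect dump above is plain JSON, so fields such as the container's IP and gateway can be extracted programmatically rather than read by eye. A small Go sketch, assuming only a local docker CLI and the container name from this report; the equivalent one-liner is docker inspect -f '{{json .NetworkSettings.Networks}}' functional-059985.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // docker inspect prints a JSON array with one object per inspected
    // container; decode only the fields of interest and ignore the rest.
    type inspectEntry struct {
    	Name            string
    	NetworkSettings struct {
    		Networks map[string]struct {
    			IPAddress string
    			Gateway   string
    		}
    	}
    }

    func main() {
    	out, err := exec.Command("docker", "inspect", "functional-059985").Output()
    	if err != nil {
    		panic(err)
    	}
    	var entries []inspectEntry
    	if err := json.Unmarshal(out, &entries); err != nil {
    		panic(err)
    	}
    	for _, e := range entries {
    		for network, cfg := range e.NetworkSettings.Networks {
    			fmt.Printf("%s / %s: ip=%s gateway=%s\n", e.Name, network, cfg.IPAddress, cfg.Gateway)
    		}
    	}
    }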
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-059985 -n functional-059985
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-059985 logs -n 25: (1.01107193s)
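The "==> Docker <==" section of the logs below is the root cause of this failure: every pull of kubernetesui/dashboard and kubernetesui/metrics-scraper is rejected with toomanyrequests by Docker Hub's anonymous pull limit, so the dashboard pods never start and the proxy keeps returning 503. A quick way to check the quota state from the CI host is Docker Hub's documented rate-limit probe, sketched here in Go (the auth.docker.io and registry-1.docker.io endpoints are the documented ones; error handling is minimal):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    func main() {
    	// Fetch an anonymous pull token for Docker Hub's dedicated
    	// rate-limit test repository (documented endpoint).
    	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	var tok struct {
    		Token string `json:"token"`
    	}
    	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
    		panic(err)
    	}

    	// HEAD the test manifest; the registry reports the current quota in
    	// the ratelimit-limit / ratelimit-remaining response headers (per
    	// Docker's docs, a HEAD request should not consume quota itself).
    	req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
    	if err != nil {
    		panic(err)
    	}
    	req.Header.Set("Authorization", "Bearer "+tok.Token)
    	res, err := http.DefaultClient.Do(req)
    	if err != nil {
    		panic(err)
    	}
    	res.Body.Close()
    	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
    	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
    }

In CI the usual mitigations are authenticating pulls against Docker Hub or pre-loading the needed images into the cluster (for example with minikube image load) before the test runs.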
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-059985 /tmp/TestFunctionalparallelMountCmdany-port2665761603/001:/mount-9p --alsologtostderr -v=1                   │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ ssh       │ functional-059985 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ ssh       │ functional-059985 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ ssh       │ functional-059985 ssh -- ls -la /mount-9p                                                                                         │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ ssh       │ functional-059985 ssh cat /mount-9p/test-1765012511224140653                                                                      │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ ssh       │ functional-059985 ssh stat /mount-9p/created-by-test                                                                              │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ ssh       │ functional-059985 ssh stat /mount-9p/created-by-pod                                                                               │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ ssh       │ functional-059985 ssh sudo umount -f /mount-9p                                                                                    │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ ssh       │ functional-059985 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ mount     │ -p functional-059985 /tmp/TestFunctionalparallelMountCmdspecific-port2064617473/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ ssh       │ functional-059985 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ ssh       │ functional-059985 ssh -- ls -la /mount-9p                                                                                         │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ ssh       │ functional-059985 ssh sudo umount -f /mount-9p                                                                                    │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ ssh       │ functional-059985 ssh findmnt -T /mount1                                                                                          │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ mount     │ -p functional-059985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3406461228/001:/mount1 --alsologtostderr -v=1                │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ mount     │ -p functional-059985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3406461228/001:/mount2 --alsologtostderr -v=1                │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ mount     │ -p functional-059985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3406461228/001:/mount3 --alsologtostderr -v=1                │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ ssh       │ functional-059985 ssh findmnt -T /mount1                                                                                          │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ ssh       │ functional-059985 ssh findmnt -T /mount2                                                                                          │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ ssh       │ functional-059985 ssh findmnt -T /mount3                                                                                          │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │ 06 Dec 25 09:15 UTC │
	│ mount     │ -p functional-059985 --kill=true                                                                                                  │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ start     │ -p functional-059985 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker                       │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ start     │ -p functional-059985 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ start     │ -p functional-059985 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker                       │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-059985 --alsologtostderr -v=1                                                                    │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:15 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:15:22
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:15:22.996535  615253 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:15:22.996811  615253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:15:22.996821  615253 out.go:374] Setting ErrFile to fd 2...
	I1206 09:15:22.996825  615253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:15:22.997221  615253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:15:22.997704  615253 out.go:368] Setting JSON to false
	I1206 09:15:22.998815  615253 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7070,"bootTime":1765005453,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:15:22.998893  615253 start.go:143] virtualization: kvm guest
	I1206 09:15:23.000676  615253 out.go:179] * [functional-059985] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:15:23.002295  615253 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:15:23.002335  615253 notify.go:221] Checking for updates...
	I1206 09:15:23.004817  615253 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:15:23.006160  615253 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:15:23.007393  615253 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:15:23.008584  615253 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:15:23.009742  615253 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:15:23.011225  615253 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 09:15:23.011768  615253 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:15:23.036100  615253 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:15:23.036221  615253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:15:23.094472  615253 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:15:23.08480577 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:15:23.094606  615253 docker.go:319] overlay module found
	I1206 09:15:23.096284  615253 out.go:179] * Using the docker driver based on existing profile
	I1206 09:15:23.097259  615253 start.go:309] selected driver: docker
	I1206 09:15:23.097272  615253 start.go:927] validating driver "docker" against &{Name:functional-059985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-059985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:15:23.097371  615253 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:15:23.098949  615253 out.go:203] 
	W1206 09:15:23.100067  615253 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:15:23.101109  615253 out.go:203] 
	
	
	==> Docker <==
	Dec 06 09:15:38 functional-059985 dockerd[7053]: time="2025-12-06T09:15:38.243694250Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:15:38 functional-059985 dockerd[7053]: time="2025-12-06T09:15:38.271236450Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:15:40 functional-059985 dockerd[7053]: time="2025-12-06T09:15:40.243723400Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:15:40 functional-059985 dockerd[7053]: time="2025-12-06T09:15:40.276076693Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:02 functional-059985 dockerd[7053]: time="2025-12-06T09:16:02.248883391Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:16:02 functional-059985 dockerd[7053]: time="2025-12-06T09:16:02.285089224Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:09 functional-059985 dockerd[7053]: time="2025-12-06T09:16:09.246889134Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:16:09 functional-059985 dockerd[7053]: time="2025-12-06T09:16:09.280398294Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:11 functional-059985 dockerd[7053]: time="2025-12-06T09:16:11.326747313Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:20 functional-059985 dockerd[7053]: time="2025-12-06T09:16:20.341888557Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:30 functional-059985 dockerd[7053]: time="2025-12-06T09:16:30.324229450Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:45 functional-059985 dockerd[7053]: time="2025-12-06T09:16:45.248830981Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:16:45 functional-059985 dockerd[7053]: time="2025-12-06T09:16:45.369872506Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:45 functional-059985 cri-dockerd[7453]: time="2025-12-06T09:16:45Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Dec 06 09:16:50 functional-059985 dockerd[7053]: time="2025-12-06T09:16:50.246879785Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:16:50 functional-059985 dockerd[7053]: time="2025-12-06T09:16:50.280881237Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:17:32 functional-059985 dockerd[7053]: time="2025-12-06T09:17:32.390055429Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:17:54 functional-059985 dockerd[7053]: time="2025-12-06T09:17:54.325269997Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:18:04 functional-059985 dockerd[7053]: time="2025-12-06T09:18:04.324223249Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:18:18 functional-059985 dockerd[7053]: time="2025-12-06T09:18:18.246363413Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:18:18 functional-059985 dockerd[7053]: time="2025-12-06T09:18:18.282822570Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:18:20 functional-059985 dockerd[7053]: time="2025-12-06T09:18:20.244688470Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:18:20 functional-059985 dockerd[7053]: time="2025-12-06T09:18:20.274279035Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:20:20 functional-059985 dockerd[7053]: time="2025-12-06T09:20:20.390075065Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:20:20 functional-059985 cri-dockerd[7453]: time="2025-12-06T09:20:20Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b29dca31e0b71       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   22e1b9c3e3d7f       busybox-mount                               default
	3109ba1ff0eb3       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   bdbb36766496a       hello-node-connect-7d85dfc575-grhpg         default
	553d8db6419ec       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   c3d421dee4c45       hello-node-75c85bcc94-ld9pq                 default
	b7db64a6cbcce       52546a367cc9e                                                                                         5 minutes ago       Running             coredns                   2                   f8c1ccfd90716       coredns-66bc5c9577-vxhg7                    kube-system
	6ab49dd2f0ca2       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       2                   c3e6c73344539       storage-provisioner                         kube-system
	0b137720b9a5f       8aa150647e88a                                                                                         5 minutes ago       Running             kube-proxy                2                   2757a4590d002       kube-proxy-v6ctp                            kube-system
	c9adc92dea8d0       a3e246e9556e9                                                                                         5 minutes ago       Running             etcd                      2                   0300a94259bfe       etcd-functional-059985                      kube-system
	9b2ad3a336b2a       88320b5498ff2                                                                                         5 minutes ago       Running             kube-scheduler            2                   939eac1060257       kube-scheduler-functional-059985            kube-system
	0351e6452c14b       a5f569d49a979                                                                                         5 minutes ago       Running             kube-apiserver            0                   d680ffc821477       kube-apiserver-functional-059985            kube-system
	473a7eb2d2899       01e8bacf0f500                                                                                         5 minutes ago       Running             kube-controller-manager   2                   5ba8bcac57036       kube-controller-manager-functional-059985   kube-system
	aa5173745fbac       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       1                   5d0d6464edc32       storage-provisioner                         kube-system
	5626dcb5dc256       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   60bd207fbe42f       coredns-66bc5c9577-vxhg7                    kube-system
	0142d84dfcc3e       01e8bacf0f500                                                                                         6 minutes ago       Exited              kube-controller-manager   1                   cd62f004aeb97       kube-controller-manager-functional-059985   kube-system
	9ed47e1ea713d       88320b5498ff2                                                                                         6 minutes ago       Exited              kube-scheduler            1                   42e6bcee13788       kube-scheduler-functional-059985            kube-system
	50154fcfe42bc       a3e246e9556e9                                                                                         6 minutes ago       Exited              etcd                      1                   568bbcf5b2a5a       etcd-functional-059985                      kube-system
	58a88122473ec       8aa150647e88a                                                                                         6 minutes ago       Exited              kube-proxy                1                   af3f271000a2c       kube-proxy-v6ctp                            kube-system
	
	
	==> coredns [5626dcb5dc25] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41874 - 41765 "HINFO IN 4677883846086172525.3053560101530399949. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.424707544s
	
	
	==> coredns [b7db64a6cbcc] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55932 - 63859 "HINFO IN 4009189935978222900.6966185930985369349. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022692491s
	
	
	==> describe nodes <==
	Name:               functional-059985
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-059985
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=functional-059985
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_12_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:12:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-059985
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:20:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:15:33 +0000   Sat, 06 Dec 2025 09:12:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:15:33 +0000   Sat, 06 Dec 2025 09:12:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:15:33 +0000   Sat, 06 Dec 2025 09:12:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:15:33 +0000   Sat, 06 Dec 2025 09:12:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-059985
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                ba47cce0-29a0-432c-b2a4-36f42ef3f157
	  Boot ID:                    41ef56f7-de94-4c23-8e93-ec48e4e68466
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.1.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-ld9pq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  default                     hello-node-connect-7d85dfc575-grhpg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  default                     mysql-5bb876957f-m9cm6                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     5m35s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 coredns-66bc5c9577-vxhg7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m36s
	  kube-system                 etcd-functional-059985                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m41s
	  kube-system                 kube-apiserver-functional-059985              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 kube-controller-manager-functional-059985     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 kube-proxy-v6ctp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-scheduler-functional-059985              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vrsdw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kjcb6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m34s                  kube-proxy       
	  Normal   Starting                 5m51s                  kube-proxy       
	  Normal   Starting                 6m36s                  kube-proxy       
	  Normal   Starting                 7m46s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  7m46s (x8 over 7m46s)  kubelet          Node functional-059985 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m46s (x8 over 7m46s)  kubelet          Node functional-059985 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m46s (x7 over 7m46s)  kubelet          Node functional-059985 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  7m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m41s                  kubelet          Node functional-059985 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  7m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    7m41s                  kubelet          Node functional-059985 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m41s                  kubelet          Node functional-059985 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m41s                  kubelet          Starting kubelet.
	  Normal   NodeReady                7m40s                  kubelet          Node functional-059985 status is now: NodeReady
	  Normal   RegisteredNode           7m37s                  node-controller  Node functional-059985 event: Registered Node functional-059985 in Controller
	  Warning  ContainerGCFailed        6m41s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	  Normal   RegisteredNode           6m35s                  node-controller  Node functional-059985 event: Registered Node functional-059985 in Controller
	  Normal   Starting                 5m55s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m55s (x8 over 5m55s)  kubelet          Node functional-059985 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m55s (x8 over 5m55s)  kubelet          Node functional-059985 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m55s (x7 over 5m55s)  kubelet          Node functional-059985 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           5m49s                  node-controller  Node functional-059985 event: Registered Node functional-059985 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 8a 6f 1a 22 ad 40 08 06
	[  +0.251239] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0e a7 c4 94 ef 5a 08 06
	[  +0.431184] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 41 2d 73 86 ed 08 06
	[  +0.515220] IPv4: martian source 10.244.0.8 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[  +0.026943] IPv4: martian source 10.244.0.8 from 10.244.0.6, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[  +1.299675] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 16 c7 72 c4 93 08 06
	[  +0.000525] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[Dec 6 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 3f cc d3 d9 16 08 06
	[  +0.000633] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[  +0.000768] IPv4: martian source 10.244.0.32 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[Dec 6 09:12] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 81 03 c8 c4 0c 08 06
	[Dec 6 09:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 ce 4a 36 be 39 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 6e 96 20 7e 61 08 06
	
	
	==> etcd [50154fcfe42b] <==
	{"level":"warn","ts":"2025-12-06T09:13:45.463799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.474165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.481169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.488080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.495192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.502198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.509805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.518679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.525822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.540117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.548217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.557765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.564644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.571437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.579729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.586860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.595028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.602823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.611318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.619029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.638296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.645267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.653542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.660393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.709308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46664","server-name":"","error":"EOF"}
	
	
	==> etcd [c9adc92dea8d] <==
	{"level":"warn","ts":"2025-12-06T09:14:31.308336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.314852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.324098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.331172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.338201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.345838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.352739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.367132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.374191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.388082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.394438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.400942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.407869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.415789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.422814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.429823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.436647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.444223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.451799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.458882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.480324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.484097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.491808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.498307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.539312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53492","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:20:24 up  2:02,  0 user,  load average: 0.06, 0.65, 1.26
	Linux functional-059985 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [0351e6452c14] <==
	I1206 09:14:32.002986       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:14:32.003058       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:14:32.003004       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 09:14:32.003065       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 09:14:32.007830       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1206 09:14:32.009314       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:14:32.048749       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:14:32.050828       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:14:32.258547       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:14:32.905810       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:14:33.384169       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:14:33.418862       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:14:33.449392       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:14:33.458350       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:14:35.448971       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:14:35.650513       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:14:35.749644       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:14:45.456316       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.120.143"}
	I1206 09:14:49.794043       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.127.39"}
	I1206 09:14:49.818443       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.107.147"}
	I1206 09:14:51.393889       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.122.58"}
	I1206 09:15:00.202599       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.219.219"}
	I1206 09:15:23.954224       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:15:24.067722       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.141.231"}
	I1206 09:15:24.077799       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.99.20"}
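	
	Note: the single E-level line above ("no API server IP addresses were listed in storage, refusing to erase all endpoints") comes from the endpoint reconciler during apiserver startup, before the restarted apiserver has re-registered its own IP; on a single-node cluster it is typically transient. One way to confirm it settled, as a sketch:
	
	kubectl --context functional-059985 get endpoints kubernetes -o wide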
	
	
	==> kube-controller-manager [0142d84dfcc3] <==
	I1206 09:13:49.704844       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:13:49.704821       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:13:49.704860       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:13:49.704853       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:13:49.717647       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:13:49.719806       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:13:49.722112       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 09:13:49.724377       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:13:49.726628       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1206 09:13:49.731943       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:13:49.731958       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:13:49.731964       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:13:49.753720       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1206 09:13:49.753748       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:13:49.753778       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:13:49.753812       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 09:13:49.753848       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1206 09:13:49.753849       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 09:13:49.754948       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 09:13:49.754966       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1206 09:13:49.754998       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1206 09:13:49.757287       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 09:13:49.757330       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:13:49.758526       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:13:49.775995       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [473a7eb2d289] <==
	I1206 09:14:35.270847       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 09:14:35.281157       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:14:35.288628       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:14:35.296517       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:14:35.296563       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:14:35.296619       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:14:35.296703       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:14:35.296723       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:14:35.296765       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:14:35.296793       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:14:35.296805       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:14:35.296728       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:14:35.300953       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:14:35.310163       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1206 09:14:35.310394       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1206 09:14:35.310514       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-059985"
	I1206 09:14:35.310586       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1206 09:14:35.313498       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 09:14:35.358814       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1206 09:15:24.003061       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:15:24.006722       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:15:24.009652       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:15:24.011680       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:15:24.012828       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:15:24.017905       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
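	
	Note: the repeated "serviceaccount \"kubernetes-dashboard\" not found" errors are an ordering race, not a persistent failure: the dashboard manifests create the ReplicaSets before (or concurrently with) their ServiceAccount, and the ReplicaSet controller retries until it exists. A quick check that the race resolved, as a sketch:
	
	kubectl --context functional-059985 -n kubernetes-dashboard get serviceaccount,deployment,replicaset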
	
	
	==> kube-proxy [0b137720b9a5] <==
	I1206 09:14:32.829317       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:14:32.892872       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:14:32.993670       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:14:32.993735       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:14:32.993860       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:14:33.022817       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:14:33.022898       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:14:33.029718       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:14:33.030143       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:14:33.030175       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:14:33.031795       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:14:33.031864       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:14:33.031804       1 config.go:200] "Starting service config controller"
	I1206 09:14:33.032094       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:14:33.031831       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:14:33.032130       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:14:33.032680       1 config.go:309] "Starting node config controller"
	I1206 09:14:33.035783       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:14:33.035984       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:14:33.132294       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:14:33.132328       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:14:33.132354       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [58a88122473e] <==
	I1206 09:13:44.331690       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:13:44.408322       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1206 09:13:46.115388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-059985\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1206 09:13:47.208549       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:13:47.208598       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:13:47.208735       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:13:47.241427       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:13:47.241497       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:13:47.249242       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:13:47.249765       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:13:47.249807       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:13:47.251432       1 config.go:309] "Starting node config controller"
	I1206 09:13:47.251457       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:13:47.251643       1 config.go:200] "Starting service config controller"
	I1206 09:13:47.251657       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:13:47.251675       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:13:47.251679       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:13:47.251695       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:13:47.251700       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:13:47.352267       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:13:47.352389       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:13:47.352401       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:13:47.352419       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
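	
	Note: both kube-proxy instances log the same "configuration may be incomplete or incorrect" warning, and the message itself names the fix (restricting NodePort listeners via nodePortAddresses). On a kubeadm-style cluster the setting lives in the kube-proxy ConfigMap; a sketch, assuming the default kube-system/kube-proxy ConfigMap layout:
	
	kubectl --context functional-059985 -n kube-system edit configmap kube-proxy
	# under the config.conf key, set:
	#   nodePortAddresses: ["primary"]   # or explicit CIDRs, e.g. ["192.168.49.0/24"]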
	
	
	==> kube-scheduler [9b2ad3a336b2] <==
	I1206 09:14:30.743179       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:14:31.945119       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:14:31.945257       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:14:31.945319       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:14:31.945353       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:14:31.967190       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 09:14:31.967216       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:14:31.968983       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:14:31.969033       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:14:31.969341       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:14:31.969379       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:14:32.069208       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [9ed47e1ea713] <==
	I1206 09:13:45.496447       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:13:46.109389       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:13:46.109495       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:13:46.109527       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:13:46.109551       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:13:46.132640       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 09:13:46.132753       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:13:46.141991       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:13:46.142035       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:13:46.142546       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:13:46.142747       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:13:46.242419       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
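	
	Note: both scheduler instances print the same startup warning because the user system:kube-scheduler may not read the extension-apiserver-authentication ConfigMap; the log's own suggested rolebinding, adapted to bind a user rather than a service account, would be:
	
	kubectl --context functional-059985 -n kube-system create rolebinding \
	  kube-scheduler-auth-reader \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler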
	
	
	==> kubelet <==
	Dec 06 09:19:29 functional-059985 kubelet[8513]: E1206 09:19:29.226586    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="54c1aaaf-c72c-4af5-9ce5-4543673d5a2c"
	Dec 06 09:19:32 functional-059985 kubelet[8513]: E1206 09:19:32.227938    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7ca1a0b0-3356-4391-88ae-3a31e43c8a5d"
	Dec 06 09:19:33 functional-059985 kubelet[8513]: E1206 09:19:33.227746    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-m9cm6" podUID="3ea71897-e6a1-4328-8eac-112fea3296e1"
	Dec 06 09:19:36 functional-059985 kubelet[8513]: E1206 09:19:36.227832    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kjcb6" podUID="213be88b-e48a-4ac2-b5fb-e3313d535b4a"
	Dec 06 09:19:37 functional-059985 kubelet[8513]: E1206 09:19:37.228309    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vrsdw" podUID="d63534dd-a49a-49cf-a04d-e0d5a7c06cff"
	Dec 06 09:19:43 functional-059985 kubelet[8513]: E1206 09:19:43.226232    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="54c1aaaf-c72c-4af5-9ce5-4543673d5a2c"
	Dec 06 09:19:43 functional-059985 kubelet[8513]: E1206 09:19:43.227853    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7ca1a0b0-3356-4391-88ae-3a31e43c8a5d"
	Dec 06 09:19:48 functional-059985 kubelet[8513]: E1206 09:19:48.228145    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-m9cm6" podUID="3ea71897-e6a1-4328-8eac-112fea3296e1"
	Dec 06 09:19:48 functional-059985 kubelet[8513]: E1206 09:19:48.228233    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vrsdw" podUID="d63534dd-a49a-49cf-a04d-e0d5a7c06cff"
	Dec 06 09:19:49 functional-059985 kubelet[8513]: E1206 09:19:49.228843    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kjcb6" podUID="213be88b-e48a-4ac2-b5fb-e3313d535b4a"
	Dec 06 09:19:55 functional-059985 kubelet[8513]: E1206 09:19:55.226532    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="54c1aaaf-c72c-4af5-9ce5-4543673d5a2c"
	Dec 06 09:19:55 functional-059985 kubelet[8513]: E1206 09:19:55.228718    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7ca1a0b0-3356-4391-88ae-3a31e43c8a5d"
	Dec 06 09:20:00 functional-059985 kubelet[8513]: E1206 09:20:00.228387    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vrsdw" podUID="d63534dd-a49a-49cf-a04d-e0d5a7c06cff"
	Dec 06 09:20:03 functional-059985 kubelet[8513]: E1206 09:20:03.234808    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-m9cm6" podUID="3ea71897-e6a1-4328-8eac-112fea3296e1"
	Dec 06 09:20:03 functional-059985 kubelet[8513]: E1206 09:20:03.234809    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kjcb6" podUID="213be88b-e48a-4ac2-b5fb-e3313d535b4a"
	Dec 06 09:20:06 functional-059985 kubelet[8513]: E1206 09:20:06.228362    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7ca1a0b0-3356-4391-88ae-3a31e43c8a5d"
	Dec 06 09:20:07 functional-059985 kubelet[8513]: E1206 09:20:07.226343    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="54c1aaaf-c72c-4af5-9ce5-4543673d5a2c"
	Dec 06 09:20:13 functional-059985 kubelet[8513]: E1206 09:20:13.228615    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vrsdw" podUID="d63534dd-a49a-49cf-a04d-e0d5a7c06cff"
	Dec 06 09:20:15 functional-059985 kubelet[8513]: E1206 09:20:15.228215    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kjcb6" podUID="213be88b-e48a-4ac2-b5fb-e3313d535b4a"
	Dec 06 09:20:16 functional-059985 kubelet[8513]: E1206 09:20:16.228115    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-m9cm6" podUID="3ea71897-e6a1-4328-8eac-112fea3296e1"
	Dec 06 09:20:20 functional-059985 kubelet[8513]: E1206 09:20:20.392446    8513 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 06 09:20:20 functional-059985 kubelet[8513]: E1206 09:20:20.392505    8513 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 06 09:20:20 functional-059985 kubelet[8513]: E1206 09:20:20.392591    8513 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(7ca1a0b0-3356-4391-88ae-3a31e43c8a5d): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:20:20 functional-059985 kubelet[8513]: E1206 09:20:20.392620    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7ca1a0b0-3356-4391-88ae-3a31e43c8a5d"
	Dec 06 09:20:21 functional-059985 kubelet[8513]: E1206 09:20:21.225783    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="54c1aaaf-c72c-4af5-9ce5-4543673d5a2c"
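	
	Note: every kubelet error above shares one root cause: unauthenticated Docker Hub pulls hitting the toomanyrequests rate limit. Authenticated pulls get a much higher quota; a sketch that wires Hub credentials into the default service account (the secret name and the two environment variables are placeholders):
	
	kubectl --context functional-059985 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKERHUB_USER" \
	  --docker-password="$DOCKERHUB_TOKEN"
	kubectl --context functional-059985 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'
	
	Pre-loading the images from the host with "minikube image load docker.io/nginx:alpine" (and likewise for the others) is another way to sidestep in-cluster pulls.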
	
	
	==> storage-provisioner [6ab49dd2f0ca] <==
	W1206 09:19:59.406124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:01.409468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:01.413545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:03.416894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:03.421868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:05.425305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:05.429221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:07.431926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:07.435815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:09.439613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:09.444304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:11.447452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:11.451591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:13.454814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:13.459514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:15.462842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:15.466600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:17.470194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:17.475084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:19.478484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:19.482162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:21.484792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:21.489947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:23.493725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:23.498946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [aa5173745fba] <==
	I1206 09:13:56.306512       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:13:56.314754       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:13:56.314789       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:13:56.317036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:13:59.771858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:04.031955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:07.631138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:10.685130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:13.707204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:13.712167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:14:13.712328       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:14:13.712496       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-059985_8926ee2a-6dc8-4633-9118-a97f633706c5!
	I1206 09:14:13.712508       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e5608dec-0e94-4ab1-bb57-edede591ea22", APIVersion:"v1", ResourceVersion:"572", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-059985_8926ee2a-6dc8-4633-9118-a97f633706c5 became leader
	W1206 09:14:13.714471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:13.720248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:14:13.812832       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-059985_8926ee2a-6dc8-4633-9118-a97f633706c5!
	W1206 09:14:15.723800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:15.728402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:17.731603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:17.736896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:19.740106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:19.744502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:21.747960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:21.752249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
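	
	Note: both storage-provisioner instances emit the Endpoints deprecation warning on every leader-election renewal because the provisioner still uses an Endpoints-based lock (the kube-system/k8s.io-minikube-hostpath object named in the LeaderElection event above); the warnings are cosmetic. The lock record itself can be inspected, as a sketch:
	
	kubectl --context functional-059985 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'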
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-059985 -n functional-059985
helpers_test.go:269: (dbg) Run:  kubectl --context functional-059985 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-m9cm6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vrsdw kubernetes-dashboard-855c9754f9-kjcb6
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-059985 describe pod busybox-mount mysql-5bb876957f-m9cm6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vrsdw kubernetes-dashboard-855c9754f9-kjcb6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-059985 describe pod busybox-mount mysql-5bb876957f-m9cm6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vrsdw kubernetes-dashboard-855c9754f9-kjcb6: exit status 1 (94.542448ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-059985/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:15:13 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://b29dca31e0b7194c528fe0f7d691fa9a293fd3658f5220333a7846f0a0ee5d13
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 06 Dec 2025 09:15:15 +0000
	      Finished:     Sat, 06 Dec 2025 09:15:15 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b65rl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-b65rl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m11s  default-scheduler  Successfully assigned default/busybox-mount to functional-059985
	  Normal  Pulling    5m12s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m10s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.485s (1.485s including waiting). Image size: 4403845 bytes.
	  Normal  Created    5m10s  kubelet            Created container: mount-munger
	  Normal  Started    5m10s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-m9cm6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-059985/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:14:49 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lxxf9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lxxf9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m35s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-m9cm6 to functional-059985
	  Warning  Failed     5m35s                  kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m31s (x5 over 5m35s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m31s (x5 over 5m35s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m31s (x4 over 5m20s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    22s (x21 over 5m34s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     22s (x21 over 5m34s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-059985/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:14:51 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jv26d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jv26d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m33s                  default-scheduler  Successfully assigned default/nginx-svc to functional-059985
	  Normal   Pulling    2m53s (x5 over 5m34s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m53s (x5 over 5m33s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m53s (x5 over 5m33s)  kubelet            Error: ErrImagePull
	  Warning  Failed     30s (x20 over 5m33s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    19s (x21 over 5m33s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-059985/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:14:56 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rck6k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-rck6k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m28s                  default-scheduler  Successfully assigned default/sp-pod to functional-059985
	  Normal   Pulling    2m21s (x5 over 5m28s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m21s (x5 over 5m28s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m21s (x5 over 5m28s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    18s (x21 over 5m28s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     18s (x21 over 5m28s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vrsdw" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-kjcb6" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-059985 describe pod busybox-mount mysql-5bb876957f-m9cm6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vrsdw kubernetes-dashboard-855c9754f9-kjcb6: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (301.97s)
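
Note: of the pods kubectl listed as non-running, busybox-mount had already Succeeded; the rest are all stuck on the same Docker Hub rate limit seen in the kubelet log, not on anything dashboard-specific. A quick triage sketch that surfaces the pull failures in one place:

	kubectl --context functional-059985 get events -A \
	  --field-selector type=Warning,reason=Failed --sort-by=.lastTimestamp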

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (367.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [8da40c66-b4fd-4655-9174-0b485ccd3bfe] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003818255s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-059985 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-059985 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-059985 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-059985 apply -f testdata/storage-provisioner/pod.yaml
I1206 09:14:56.992240  558759 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [54c1aaaf-c72c-4af5-9ce5-4543673d5a2c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-059985 -n functional-059985
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-12-06 09:20:57.315949982 +0000 UTC m=+1243.022589645
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-059985 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-059985 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-059985/192.168.49.2
Start Time:       Sat, 06 Dec 2025 09:14:56 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:  10.244.0.10
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rck6k (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-rck6k:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                 From               Message
----     ------     ----                ----               -------
Normal   Scheduled  6m                  default-scheduler  Successfully assigned default/sp-pod to functional-059985
Normal   Pulling    2m53s (x5 over 6m)  kubelet            Pulling image "docker.io/nginx"
Warning  Failed     2m53s (x5 over 6m)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m53s (x5 over 6m)  kubelet            Error: ErrImagePull
Normal   BackOff    50s (x21 over 6m)   kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     50s (x21 over 6m)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-059985 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-059985 logs sp-pod -n default: exit status 1 (68.441498ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-059985 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
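The PVC test only exercises dynamic provisioning: apply a claim, apply a pod that mounts the bound volume, wait for the pod to run. It failed solely because the docker.io/nginx pull was rate-limited, not because provisioning broke. The flow can be replayed by hand with a manifest equivalent to the describe output above; in this sketch the names, label, image, and mount path come from the failing pod, while the access mode and 500Mi request are assumptions:

	kubectl --context functional-059985 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: sp-pod
	  labels:
	    test: storage-provisioner
	spec:
	  containers:
	  - name: myfrontend
	    image: docker.io/nginx
	    volumeMounts:
	    - mountPath: /tmp/mount
	      name: mypd
	  volumes:
	  - name: mypd
	    persistentVolumeClaim:
	      claimName: myclaim
	EOF

Pre-loading the image from the host would sidestep the registry pull entirely, e.g. out/minikube-linux-amd64 -p functional-059985 image load docker.io/nginx.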
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-059985
helpers_test.go:243: (dbg) docker inspect functional-059985:

-- stdout --
	[
	    {
	        "Id": "28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1",
	        "Created": "2025-12-06T09:12:25.761854111Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 595684,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:12:25.79611275Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1/hostname",
	        "HostsPath": "/var/lib/docker/containers/28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1/hosts",
	        "LogPath": "/var/lib/docker/containers/28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1/28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1-json.log",
	        "Name": "/functional-059985",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-059985:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-059985",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1",
	                "LowerDir": "/var/lib/docker/overlay2/f58f811cd1ba143b7f154b90e3041b92e9aac39862da0e326320338057382175-init/diff:/var/lib/docker/overlay2/e436edcb7322c840f879b3c5d1d6403a3125a1711763277d84155a12f01e0462/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f58f811cd1ba143b7f154b90e3041b92e9aac39862da0e326320338057382175/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f58f811cd1ba143b7f154b90e3041b92e9aac39862da0e326320338057382175/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f58f811cd1ba143b7f154b90e3041b92e9aac39862da0e326320338057382175/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-059985",
	                "Source": "/var/lib/docker/volumes/functional-059985/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-059985",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-059985",
	                "name.minikube.sigs.k8s.io": "functional-059985",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "639f0d64dc493dc7d45e24f95d75b88019e64dcc87ad68bb60f67d1c1c02731c",
	            "SandboxKey": "/var/run/docker/netns/639f0d64dc49",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-059985": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "196fcf3b074f568c2e7586daa1ecef14111229ee5ba62bcd741749a967fdc6f6",
	                    "EndpointID": "416d03b943ae896e32254e021ca2971c63b8e6f933f20cf6d6a38c64029f3545",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "f2:d4:ba:1c:b0:0d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-059985",
	                        "28465384520d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
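The inspect dump's main payload is the port table: every published port of the kic container is bound to loopback only, so the cluster is reachable just from this host. Individual fields can be pulled out with a Go template instead of scanning the JSON; a sketch using the container and ports from this run:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-059985
	docker port functional-059985 22

Against the output above, the first command prints 33184 (the apiserver mapping) and the second prints 127.0.0.1:33181 (the SSH mapping).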
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-059985 -n functional-059985
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 logs -n 25
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                            ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-059985 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                   │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ ssh            │ functional-059985 ssh sudo systemctl is-active crio                                                                                                        │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │                     │
	│ image          │ functional-059985 image load --daemon kicbase/echo-server:functional-059985 --alsologtostderr                                                              │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls                                                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image load --daemon kicbase/echo-server:functional-059985 --alsologtostderr                                                              │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls                                                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image load --daemon kicbase/echo-server:functional-059985 --alsologtostderr                                                              │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls                                                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image save kicbase/echo-server:functional-059985 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image rm kicbase/echo-server:functional-059985 --alsologtostderr                                                                         │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls                                                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr                                       │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls                                                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image save --daemon kicbase/echo-server:functional-059985 --alsologtostderr                                                              │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ ssh            │ functional-059985 ssh sudo cat /etc/test/nested/copy/558759/hosts                                                                                          │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ update-context │ functional-059985 update-context --alsologtostderr -v=2                                                                                                    │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ update-context │ functional-059985 update-context --alsologtostderr -v=2                                                                                                    │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ update-context │ functional-059985 update-context --alsologtostderr -v=2                                                                                                    │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls --format short --alsologtostderr                                                                                                │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls --format yaml --alsologtostderr                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ ssh            │ functional-059985 ssh pgrep buildkitd                                                                                                                      │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │                     │
	│ image          │ functional-059985 image build -t localhost/my-image:functional-059985 testdata/build --alsologtostderr                                                     │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls                                                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls --format json --alsologtostderr                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls --format table --alsologtostderr                                                                                                │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
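	(The audit trail above is the image round-trip the functional suite drives: load an image into the cluster, list it, save it to a tarball, remove it, and re-load it from the file. The same cycle can be replayed by hand; a sketch with an illustrative tarball path:
	
	out/minikube-linux-amd64 -p functional-059985 image load kicbase/echo-server:functional-059985
	out/minikube-linux-amd64 -p functional-059985 image save kicbase/echo-server:functional-059985 /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-059985 image rm kicbase/echo-server:functional-059985
	out/minikube-linux-amd64 -p functional-059985 image load /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-059985 image ls)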
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:15:22
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:15:22.996535  615253 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:15:22.996811  615253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:15:22.996821  615253 out.go:374] Setting ErrFile to fd 2...
	I1206 09:15:22.996825  615253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:15:22.997221  615253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:15:22.997704  615253 out.go:368] Setting JSON to false
	I1206 09:15:22.998815  615253 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7070,"bootTime":1765005453,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:15:22.998893  615253 start.go:143] virtualization: kvm guest
	I1206 09:15:23.000676  615253 out.go:179] * [functional-059985] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:15:23.002295  615253 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:15:23.002335  615253 notify.go:221] Checking for updates...
	I1206 09:15:23.004817  615253 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:15:23.006160  615253 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:15:23.007393  615253 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:15:23.008584  615253 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:15:23.009742  615253 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:15:23.011225  615253 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 09:15:23.011768  615253 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:15:23.036100  615253 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:15:23.036221  615253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:15:23.094472  615253 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:15:23.08480577 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:15:23.094606  615253 docker.go:319] overlay module found
	I1206 09:15:23.096284  615253 out.go:179] * Using the docker driver based on existing profile
	I1206 09:15:23.097259  615253 start.go:309] selected driver: docker
	I1206 09:15:23.097272  615253 start.go:927] validating driver "docker" against &{Name:functional-059985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-059985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:15:23.097371  615253 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:15:23.098949  615253 out.go:203] 
	W1206 09:15:23.100067  615253 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:15:23.101109  615253 out.go:203] 
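	(Note on the exit above: this start requested 250MiB of memory, below minikube's 1800MB usable floor, so validation aborts before touching the cluster; the French output plus the deliberately tiny request suggest this is the suite's locale/dry-run check rather than a real failure. A start that passes the check would look like the following, with 2048MB as an assumed value:
	
	out/minikube-linux-amd64 start -p functional-059985 --memory=2048)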
	
	
	==> Docker <==
	Dec 06 09:16:02 functional-059985 dockerd[7053]: time="2025-12-06T09:16:02.248883391Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:16:02 functional-059985 dockerd[7053]: time="2025-12-06T09:16:02.285089224Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:09 functional-059985 dockerd[7053]: time="2025-12-06T09:16:09.246889134Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:16:09 functional-059985 dockerd[7053]: time="2025-12-06T09:16:09.280398294Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:11 functional-059985 dockerd[7053]: time="2025-12-06T09:16:11.326747313Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:20 functional-059985 dockerd[7053]: time="2025-12-06T09:16:20.341888557Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:30 functional-059985 dockerd[7053]: time="2025-12-06T09:16:30.324229450Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:45 functional-059985 dockerd[7053]: time="2025-12-06T09:16:45.248830981Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:16:45 functional-059985 dockerd[7053]: time="2025-12-06T09:16:45.369872506Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:45 functional-059985 cri-dockerd[7453]: time="2025-12-06T09:16:45Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Dec 06 09:16:50 functional-059985 dockerd[7053]: time="2025-12-06T09:16:50.246879785Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:16:50 functional-059985 dockerd[7053]: time="2025-12-06T09:16:50.280881237Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:17:32 functional-059985 dockerd[7053]: time="2025-12-06T09:17:32.390055429Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:17:54 functional-059985 dockerd[7053]: time="2025-12-06T09:17:54.325269997Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:18:04 functional-059985 dockerd[7053]: time="2025-12-06T09:18:04.324223249Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:18:18 functional-059985 dockerd[7053]: time="2025-12-06T09:18:18.246363413Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:18:18 functional-059985 dockerd[7053]: time="2025-12-06T09:18:18.282822570Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:18:20 functional-059985 dockerd[7053]: time="2025-12-06T09:18:20.244688470Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:18:20 functional-059985 dockerd[7053]: time="2025-12-06T09:18:20.274279035Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:20:20 functional-059985 dockerd[7053]: time="2025-12-06T09:20:20.390075065Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:20:20 functional-059985 cri-dockerd[7453]: time="2025-12-06T09:20:20Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Dec 06 09:20:34 functional-059985 dockerd[7053]: 2025/12/06 09:20:34 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Dec 06 09:20:36 functional-059985 dockerd[7053]: time="2025-12-06T09:20:36.577243943Z" level=info msg="sbJoin: gwep4 ''->'1d737d51cb3b', gwep6 ''->''"
	Dec 06 09:20:44 functional-059985 dockerd[7053]: time="2025-12-06T09:20:44.335607968Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:20:47 functional-059985 dockerd[7053]: time="2025-12-06T09:20:47.318002407Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
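	(Every "Not continuing with pull after error" entry above is the same toomanyrequests answer from Docker Hub. Two common mitigations, sketched with placeholder values: authenticate the daemon that performs the pulls, here the one inside the minikube node, so pulls count against an account quota, or start the cluster pointing at a Hub mirror:
	
	out/minikube-linux-amd64 -p functional-059985 ssh -- docker login -u "$DOCKERHUB_USER"
	out/minikube-linux-amd64 start -p functional-059985 --registry-mirror=https://mirror.gcr.io
	
	DOCKERHUB_USER is a placeholder; --registry-mirror is a start-time flag.)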
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b29dca31e0b71       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   22e1b9c3e3d7f       busybox-mount                               default
	3109ba1ff0eb3       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   bdbb36766496a       hello-node-connect-7d85dfc575-grhpg         default
	553d8db6419ec       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           6 minutes ago       Running             echo-server               0                   c3d421dee4c45       hello-node-75c85bcc94-ld9pq                 default
	b7db64a6cbcce       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   f8c1ccfd90716       coredns-66bc5c9577-vxhg7                    kube-system
	6ab49dd2f0ca2       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       2                   c3e6c73344539       storage-provisioner                         kube-system
	0b137720b9a5f       8aa150647e88a                                                                                         6 minutes ago       Running             kube-proxy                2                   2757a4590d002       kube-proxy-v6ctp                            kube-system
	c9adc92dea8d0       a3e246e9556e9                                                                                         6 minutes ago       Running             etcd                      2                   0300a94259bfe       etcd-functional-059985                      kube-system
	9b2ad3a336b2a       88320b5498ff2                                                                                         6 minutes ago       Running             kube-scheduler            2                   939eac1060257       kube-scheduler-functional-059985            kube-system
	0351e6452c14b       a5f569d49a979                                                                                         6 minutes ago       Running             kube-apiserver            0                   d680ffc821477       kube-apiserver-functional-059985            kube-system
	473a7eb2d2899       01e8bacf0f500                                                                                         6 minutes ago       Running             kube-controller-manager   2                   5ba8bcac57036       kube-controller-manager-functional-059985   kube-system
	aa5173745fbac       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       1                   5d0d6464edc32       storage-provisioner                         kube-system
	5626dcb5dc256       52546a367cc9e                                                                                         7 minutes ago       Exited              coredns                   1                   60bd207fbe42f       coredns-66bc5c9577-vxhg7                    kube-system
	0142d84dfcc3e       01e8bacf0f500                                                                                         7 minutes ago       Exited              kube-controller-manager   1                   cd62f004aeb97       kube-controller-manager-functional-059985   kube-system
	9ed47e1ea713d       88320b5498ff2                                                                                         7 minutes ago       Exited              kube-scheduler            1                   42e6bcee13788       kube-scheduler-functional-059985            kube-system
	50154fcfe42bc       a3e246e9556e9                                                                                         7 minutes ago       Exited              etcd                      1                   568bbcf5b2a5a       etcd-functional-059985                      kube-system
	58a88122473ec       8aa150647e88a                                                                                         7 minutes ago       Exited              kube-proxy                1                   af3f271000a2c       kube-proxy-v6ctp                            kube-system
	
	
	==> coredns [5626dcb5dc25] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41874 - 41765 "HINFO IN 4677883846086172525.3053560101530399949. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.424707544s
	
	
	==> coredns [b7db64a6cbcc] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55932 - 63859 "HINFO IN 4009189935978222900.6966185930985369349. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022692491s
	
	
	==> describe nodes <==
	Name:               functional-059985
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-059985
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=functional-059985
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_12_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:12:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-059985
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:20:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:20:38 +0000   Sat, 06 Dec 2025 09:12:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:20:38 +0000   Sat, 06 Dec 2025 09:12:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:20:38 +0000   Sat, 06 Dec 2025 09:12:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:20:38 +0000   Sat, 06 Dec 2025 09:12:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-059985
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                ba47cce0-29a0-432c-b2a4-36f42ef3f157
	  Boot ID:                    41ef56f7-de94-4c23-8e93-ec48e4e68466
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.1.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-ld9pq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  default                     hello-node-connect-7d85dfc575-grhpg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  default                     mysql-5bb876957f-m9cm6                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     6m9s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-vxhg7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m10s
	  kube-system                 etcd-functional-059985                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m15s
	  kube-system                 kube-apiserver-functional-059985              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-functional-059985     200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-proxy-v6ctp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-scheduler-functional-059985              100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vrsdw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kjcb6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m9s                   kube-proxy       
	  Normal   Starting                 6m25s                  kube-proxy       
	  Normal   Starting                 7m11s                  kube-proxy       
	  Normal   Starting                 8m20s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  8m20s (x8 over 8m20s)  kubelet          Node functional-059985 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m20s (x8 over 8m20s)  kubelet          Node functional-059985 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m20s (x7 over 8m20s)  kubelet          Node functional-059985 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  8m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m15s                  kubelet          Node functional-059985 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  8m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    8m15s                  kubelet          Node functional-059985 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m15s                  kubelet          Node functional-059985 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m15s                  kubelet          Starting kubelet.
	  Normal   NodeReady                8m14s                  kubelet          Node functional-059985 status is now: NodeReady
	  Normal   RegisteredNode           8m11s                  node-controller  Node functional-059985 event: Registered Node functional-059985 in Controller
	  Warning  ContainerGCFailed        7m15s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	  Normal   RegisteredNode           7m9s                   node-controller  Node functional-059985 event: Registered Node functional-059985 in Controller
	  Normal   Starting                 6m29s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m29s (x8 over 6m29s)  kubelet          Node functional-059985 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m29s (x8 over 6m29s)  kubelet          Node functional-059985 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m29s (x7 over 6m29s)  kubelet          Node functional-059985 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           6m23s                  node-controller  Node functional-059985 event: Registered Node functional-059985 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 8a 6f 1a 22 ad 40 08 06
	[  +0.251239] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0e a7 c4 94 ef 5a 08 06
	[  +0.431184] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 41 2d 73 86 ed 08 06
	[  +0.515220] IPv4: martian source 10.244.0.8 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[  +0.026943] IPv4: martian source 10.244.0.8 from 10.244.0.6, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[  +1.299675] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 16 c7 72 c4 93 08 06
	[  +0.000525] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[Dec 6 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 3f cc d3 d9 16 08 06
	[  +0.000633] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[  +0.000768] IPv4: martian source 10.244.0.32 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[Dec 6 09:12] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 81 03 c8 c4 0c 08 06
	[Dec 6 09:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 ce 4a 36 be 39 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 6e 96 20 7e 61 08 06
	
	
	==> etcd [50154fcfe42b] <==
	{"level":"warn","ts":"2025-12-06T09:13:45.463799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.474165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.481169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.488080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.495192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.502198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.509805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.518679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.525822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.540117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.548217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.557765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.564644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.571437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.579729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.586860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.595028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.602823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.611318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.619029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.638296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.645267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.653542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.660393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.709308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46664","server-name":"","error":"EOF"}
	
	
	==> etcd [c9adc92dea8d] <==
	{"level":"warn","ts":"2025-12-06T09:14:31.308336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.314852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.324098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.331172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.338201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.345838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.352739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.367132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.374191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.388082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.394438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.400942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.407869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.415789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.422814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.429823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.436647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.444223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.451799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.458882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.480324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.484097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.491808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.498307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.539312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53492","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:20:58 up  2:03,  0 user,  load average: 0.19, 0.63, 1.23
	Linux functional-059985 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [0351e6452c14] <==
	I1206 09:14:32.002986       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:14:32.003058       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:14:32.003004       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 09:14:32.003065       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 09:14:32.007830       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1206 09:14:32.009314       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:14:32.048749       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:14:32.050828       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:14:32.258547       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:14:32.905810       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:14:33.384169       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:14:33.418862       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:14:33.449392       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:14:33.458350       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:14:35.448971       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:14:35.650513       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:14:35.749644       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:14:45.456316       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.120.143"}
	I1206 09:14:49.794043       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.127.39"}
	I1206 09:14:49.818443       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.107.147"}
	I1206 09:14:51.393889       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.122.58"}
	I1206 09:15:00.202599       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.219.219"}
	I1206 09:15:23.954224       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:15:24.067722       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.141.231"}
	I1206 09:15:24.077799       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.99.20"}
	
	
	==> kube-controller-manager [0142d84dfcc3] <==
	I1206 09:13:49.704844       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:13:49.704821       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:13:49.704860       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:13:49.704853       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:13:49.717647       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:13:49.719806       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:13:49.722112       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 09:13:49.724377       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:13:49.726628       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1206 09:13:49.731943       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:13:49.731958       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:13:49.731964       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:13:49.753720       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1206 09:13:49.753748       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:13:49.753778       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:13:49.753812       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 09:13:49.753848       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1206 09:13:49.753849       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 09:13:49.754948       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 09:13:49.754966       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1206 09:13:49.754998       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1206 09:13:49.757287       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 09:13:49.757330       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:13:49.758526       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:13:49.775995       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [473a7eb2d289] <==
	I1206 09:14:35.270847       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 09:14:35.281157       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:14:35.288628       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:14:35.296517       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:14:35.296563       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:14:35.296619       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:14:35.296703       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:14:35.296723       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:14:35.296765       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:14:35.296793       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:14:35.296805       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:14:35.296728       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:14:35.300953       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:14:35.310163       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1206 09:14:35.310394       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1206 09:14:35.310514       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-059985"
	I1206 09:14:35.310586       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1206 09:14:35.313498       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 09:14:35.358814       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1206 09:15:24.003061       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:15:24.006722       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:15:24.009652       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:15:24.011680       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:15:24.012828       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:15:24.017905       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
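	
	Note: the six "serviceaccount \"kubernetes-dashboard\" not found" errors are ordering noise: the ReplicaSet controller tried to create dashboard pods before the addon finished applying the ServiceAccount. Once the account exists the syncs succeed, which can be verified with:
	
	  kubectl --context functional-059985 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard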
	
	
	==> kube-proxy [0b137720b9a5] <==
	I1206 09:14:32.829317       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:14:32.892872       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:14:32.993670       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:14:32.993735       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:14:32.993860       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:14:33.022817       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:14:33.022898       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:14:33.029718       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:14:33.030143       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:14:33.030175       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:14:33.031795       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:14:33.031864       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:14:33.031804       1 config.go:200] "Starting service config controller"
	I1206 09:14:33.032094       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:14:33.031831       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:14:33.032130       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:14:33.032680       1 config.go:309] "Starting node config controller"
	I1206 09:14:33.035783       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:14:33.035984       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:14:33.132294       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:14:33.132328       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:14:33.132354       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [58a88122473e] <==
	I1206 09:13:44.331690       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:13:44.408322       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1206 09:13:46.115388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-059985\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1206 09:13:47.208549       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:13:47.208598       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:13:47.208735       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:13:47.241427       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:13:47.241497       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:13:47.249242       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:13:47.249765       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:13:47.249807       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:13:47.251432       1 config.go:309] "Starting node config controller"
	I1206 09:13:47.251457       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:13:47.251643       1 config.go:200] "Starting service config controller"
	I1206 09:13:47.251657       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:13:47.251675       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:13:47.251679       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:13:47.251695       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:13:47.251700       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:13:47.352267       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:13:47.352389       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:13:47.352401       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:13:47.352419       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9b2ad3a336b2] <==
	I1206 09:14:30.743179       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:14:31.945119       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:14:31.945257       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:14:31.945319       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:14:31.945353       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:14:31.967190       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 09:14:31.967216       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:14:31.968983       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:14:31.969033       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:14:31.969341       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:14:31.969379       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:14:32.069208       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [9ed47e1ea713] <==
	I1206 09:13:45.496447       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:13:46.109389       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:13:46.109495       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:13:46.109527       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:13:46.109551       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:13:46.132640       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 09:13:46.132753       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:13:46.141991       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:13:46.142035       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:13:46.142546       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:13:46.142747       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:13:46.242419       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 09:20:20 functional-059985 kubelet[8513]: E1206 09:20:20.392446    8513 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 06 09:20:20 functional-059985 kubelet[8513]: E1206 09:20:20.392505    8513 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Dec 06 09:20:20 functional-059985 kubelet[8513]: E1206 09:20:20.392591    8513 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx-svc_default(7ca1a0b0-3356-4391-88ae-3a31e43c8a5d): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:20:20 functional-059985 kubelet[8513]: E1206 09:20:20.392620    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7ca1a0b0-3356-4391-88ae-3a31e43c8a5d"
	Dec 06 09:20:21 functional-059985 kubelet[8513]: E1206 09:20:21.225783    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="54c1aaaf-c72c-4af5-9ce5-4543673d5a2c"
	Dec 06 09:20:26 functional-059985 kubelet[8513]: E1206 09:20:26.227885    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kjcb6" podUID="213be88b-e48a-4ac2-b5fb-e3313d535b4a"
	Dec 06 09:20:26 functional-059985 kubelet[8513]: E1206 09:20:26.227885    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vrsdw" podUID="d63534dd-a49a-49cf-a04d-e0d5a7c06cff"
	Dec 06 09:20:30 functional-059985 kubelet[8513]: E1206 09:20:30.228111    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-m9cm6" podUID="3ea71897-e6a1-4328-8eac-112fea3296e1"
	Dec 06 09:20:31 functional-059985 kubelet[8513]: E1206 09:20:31.227881    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7ca1a0b0-3356-4391-88ae-3a31e43c8a5d"
	Dec 06 09:20:32 functional-059985 kubelet[8513]: E1206 09:20:32.226606    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="54c1aaaf-c72c-4af5-9ce5-4543673d5a2c"
	Dec 06 09:20:41 functional-059985 kubelet[8513]: E1206 09:20:41.228031    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kjcb6" podUID="213be88b-e48a-4ac2-b5fb-e3313d535b4a"
	Dec 06 09:20:41 functional-059985 kubelet[8513]: E1206 09:20:41.228035    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vrsdw" podUID="d63534dd-a49a-49cf-a04d-e0d5a7c06cff"
	Dec 06 09:20:44 functional-059985 kubelet[8513]: E1206 09:20:44.337733    8513 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 06 09:20:44 functional-059985 kubelet[8513]: E1206 09:20:44.337791    8513 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 06 09:20:44 functional-059985 kubelet[8513]: E1206 09:20:44.337892    8513 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-m9cm6_default(3ea71897-e6a1-4328-8eac-112fea3296e1): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:20:44 functional-059985 kubelet[8513]: E1206 09:20:44.337952    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-m9cm6" podUID="3ea71897-e6a1-4328-8eac-112fea3296e1"
	Dec 06 09:20:45 functional-059985 kubelet[8513]: E1206 09:20:45.228287    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7ca1a0b0-3356-4391-88ae-3a31e43c8a5d"
	Dec 06 09:20:47 functional-059985 kubelet[8513]: E1206 09:20:47.320379    8513 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 06 09:20:47 functional-059985 kubelet[8513]: E1206 09:20:47.320432    8513 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 06 09:20:47 functional-059985 kubelet[8513]: E1206 09:20:47.320545    8513 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(54c1aaaf-c72c-4af5-9ce5-4543673d5a2c): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:20:47 functional-059985 kubelet[8513]: E1206 09:20:47.320588    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="54c1aaaf-c72c-4af5-9ce5-4543673d5a2c"
	Dec 06 09:20:53 functional-059985 kubelet[8513]: E1206 09:20:53.228489    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kjcb6" podUID="213be88b-e48a-4ac2-b5fb-e3313d535b4a"
	Dec 06 09:20:56 functional-059985 kubelet[8513]: E1206 09:20:56.228308    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7ca1a0b0-3356-4391-88ae-3a31e43c8a5d"
	Dec 06 09:20:56 functional-059985 kubelet[8513]: E1206 09:20:56.228338    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vrsdw" podUID="d63534dd-a49a-49cf-a04d-e0d5a7c06cff"
	Dec 06 09:20:58 functional-059985 kubelet[8513]: E1206 09:20:58.228052    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-m9cm6" podUID="3ea71897-e6a1-4328-8eac-112fea3296e1"
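	
	Note: every remaining failure in this kubelet log shares one root cause, Docker Hub's unauthenticated pull rate limit (toomanyrequests) hitting docker.io/nginx, docker.io/mysql:5.7 and the dashboard images. Two hedged workarounds, assuming the profile shown in these logs; "regcred" and the <user>/<token> placeholders are illustrative, not values from this run:
	
	  # authenticate pulls via a registry secret attached to the default service account
	  kubectl --context functional-059985 create secret docker-registry regcred \
	    --docker-server=https://index.docker.io/v1/ \
	    --docker-username=<user> --docker-password=<token>
	  kubectl --context functional-059985 patch serviceaccount default \
	    -p '{"imagePullSecrets": [{"name": "regcred"}]}'
	
	  # or side-load the image from an authenticated host daemon so the node never pulls
	  docker pull nginx:alpine
	  minikube -p functional-059985 image load nginx:alpine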
	
	
	==> storage-provisioner [6ab49dd2f0ca] <==
	W1206 09:20:33.542339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:35.545312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:35.548931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:37.552512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:37.556168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:39.559493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:39.563305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:41.566870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:41.570582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:43.574122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:43.579426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:45.582257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:45.585894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:47.588526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:47.592625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:49.595888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:49.601315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:51.604126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:51.607870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:53.611069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:53.615588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:55.619363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:55.623088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:57.625706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:20:57.630012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
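	
	Note: the storage-provisioner warnings are client-go deprecation notices, emitted because the provisioner still uses a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath) for leader election; they are harmless here and unrelated to the test failure. The lock object itself can be inspected with:
	
	  kubectl --context functional-059985 -n kube-system get endpoints k8s.io-minikube-hostpath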
	
	
	==> storage-provisioner [aa5173745fba] <==
	I1206 09:13:56.306512       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:13:56.314754       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:13:56.314789       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:13:56.317036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:13:59.771858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:04.031955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:07.631138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:10.685130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:13.707204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:13.712167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:14:13.712328       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:14:13.712496       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-059985_8926ee2a-6dc8-4633-9118-a97f633706c5!
	I1206 09:14:13.712508       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e5608dec-0e94-4ab1-bb57-edede591ea22", APIVersion:"v1", ResourceVersion:"572", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-059985_8926ee2a-6dc8-4633-9118-a97f633706c5 became leader
	W1206 09:14:13.714471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:13.720248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:14:13.812832       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-059985_8926ee2a-6dc8-4633-9118-a97f633706c5!
	W1206 09:14:15.723800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:15.728402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:17.731603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:17.736896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:19.740106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:19.744502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:21.747960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:21.752249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-059985 -n functional-059985
helpers_test.go:269: (dbg) Run:  kubectl --context functional-059985 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-m9cm6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vrsdw kubernetes-dashboard-855c9754f9-kjcb6
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-059985 describe pod busybox-mount mysql-5bb876957f-m9cm6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vrsdw kubernetes-dashboard-855c9754f9-kjcb6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-059985 describe pod busybox-mount mysql-5bb876957f-m9cm6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vrsdw kubernetes-dashboard-855c9754f9-kjcb6: exit status 1 (87.465434ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-059985/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:15:13 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://b29dca31e0b7194c528fe0f7d691fa9a293fd3658f5220333a7846f0a0ee5d13
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 06 Dec 2025 09:15:15 +0000
	      Finished:     Sat, 06 Dec 2025 09:15:15 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b65rl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-b65rl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m46s  default-scheduler  Successfully assigned default/busybox-mount to functional-059985
	  Normal  Pulling    5m46s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m44s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.485s (1.485s including waiting). Image size: 4403845 bytes.
	  Normal  Created    5m44s  kubelet            Created container: mount-munger
	  Normal  Started    5m44s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-m9cm6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-059985/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:14:49 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lxxf9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lxxf9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m9s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-m9cm6 to functional-059985
	  Warning  Failed     6m9s                  kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m5s (x5 over 6m9s)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     3m5s (x5 over 6m9s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m5s (x4 over 5m54s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    56s (x21 over 6m8s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     56s (x21 over 6m8s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-059985/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:14:51 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jv26d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jv26d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m7s                  default-scheduler  Successfully assigned default/nginx-svc to functional-059985
	  Normal   Pulling    3m27s (x5 over 6m8s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m27s (x5 over 6m7s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m27s (x5 over 6m7s)  kubelet            Error: ErrImagePull
	  Warning  Failed     64s (x20 over 6m7s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    53s (x21 over 6m7s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-059985/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:14:56 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rck6k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-rck6k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/sp-pod to functional-059985
	  Normal   Pulling    2m55s (x5 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m55s (x5 over 6m2s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m55s (x5 over 6m2s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    52s (x21 over 6m2s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     52s (x21 over 6m2s)   kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vrsdw" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-kjcb6" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-059985 describe pod busybox-mount mysql-5bb876957f-m9cm6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vrsdw kubernetes-dashboard-855c9754f9-kjcb6: exit status 1
E1206 09:22:00.097052  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (367.64s)
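
Every pod described above (mysql-5bb876957f-m9cm6, nginx-svc, sp-pod) is stuck in ImagePullBackOff for the same root cause: the Docker Hub unauthenticated pull rate limit ("toomanyrequests"). A minimal diagnostic sketch, not part of the test suite, that reads Docker Hub's documented ratelimit-limit/ratelimit-remaining headers via the special ratelimitpreview/test repository; the two endpoints and header names are Docker Hub's, everything else (program name, output format) is illustrative:

	// ratelimitcheck: print the anonymous Docker Hub pull quota as seen
	// from this host. Hedged sketch; the endpoints are as documented by
	// Docker, the program itself is only an illustration.
	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	func main() {
		// Fetch an anonymous pull token scoped to ratelimitpreview/test.
		resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			panic(err)
		}

		// HEAD the manifest; the response carries the quota headers
		// without consuming an image pull the way GET would.
		req, _ := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		res, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		res.Body.Close()
		// e.g. "100;w=21600" means 100 pulls per 21600s (6h) window.
		fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
		fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
	}

When ratelimit-remaining reaches 0, every docker.io pull from this runner fails exactly as logged until the window rolls over or the runner switches to authenticated pulls.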
TestFunctional/parallel/MySQL (602.52s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-059985 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-m9cm6" [3ea71897-e6a1-4328-8eac-112fea3296e1] Pending
helpers_test.go:352: "mysql-5bb876957f-m9cm6" [3ea71897-e6a1-4328-8eac-112fea3296e1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-059985 -n functional-059985
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-12-06 09:24:50.180574927 +0000 UTC m=+1475.887214772
functional_test.go:1804: (dbg) Run:  kubectl --context functional-059985 describe po mysql-5bb876957f-m9cm6 -n default
functional_test.go:1804: (dbg) kubectl --context functional-059985 describe po mysql-5bb876957f-m9cm6 -n default:
Name:             mysql-5bb876957f-m9cm6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-059985/192.168.49.2
Start Time:       Sat, 06 Dec 2025 09:14:49 +0000
Labels:           app=mysql
                  pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
  mysql:
    Container ID:   
    Image:          docker.io/mysql:5.7
    Image ID:       
    Port:           3306/TCP (mysql)
    Host Port:      0/TCP (mysql)
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  700Mi
    Requests:
      cpu:     600m
      memory:  512Mi
    Environment:
      MYSQL_ROOT_PASSWORD:  password
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lxxf9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-lxxf9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-5bb876957f-m9cm6 to functional-059985
  Warning  Failed     10m                     kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    6m56s (x5 over 10m)     kubelet            Pulling image "docker.io/mysql:5.7"
  Warning  Failed     6m56s (x5 over 10m)     kubelet            Error: ErrImagePull
  Warning  Failed     6m56s (x4 over 9m45s)   kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    4m47s (x21 over 9m59s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
  Warning  Failed     4m47s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1804: (dbg) Run:  kubectl --context functional-059985 logs mysql-5bb876957f-m9cm6 -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-059985 logs mysql-5bb876957f-m9cm6 -n default: exit status 1 (69.97085ms)

** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-m9cm6" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1804: kubectl --context functional-059985 logs mysql-5bb876957f-m9cm6 -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
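
The 10m0s wait that just expired is a poll-until-deadline loop. For reference, a hedged standalone equivalent; it assumes kubectl on PATH and the functional-059985 context from this run, and it checks only the pod phase, whereas the harness in helpers_test.go also tracks the Ready condition:

	// waitpod: poll for an app=mysql pod to reach phase Running, giving up
	// after 10 minutes with the same "context deadline exceeded" error the
	// report shows. Illustrative sketch, not the harness's implementation.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
		defer cancel()
		for {
			out, err := exec.CommandContext(ctx, "kubectl", "--context", "functional-059985",
				"get", "pods", "-l", "app=mysql",
				"-o", "jsonpath={.items[*].status.phase}").Output()
			if err == nil && strings.Contains(string(out), "Running") {
				fmt.Println("pod is Running")
				return
			}
			select {
			case <-ctx.Done():
				// Prints "context deadline exceeded" once the 10m budget is spent.
				fmt.Println("failed waiting for mysql pod:", ctx.Err())
				return
			case <-time.After(5 * time.Second):
			}
		}
	}
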
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-059985
helpers_test.go:243: (dbg) docker inspect functional-059985:

-- stdout --
	[
	    {
	        "Id": "28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1",
	        "Created": "2025-12-06T09:12:25.761854111Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 595684,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:12:25.79611275Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1/hostname",
	        "HostsPath": "/var/lib/docker/containers/28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1/hosts",
	        "LogPath": "/var/lib/docker/containers/28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1/28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1-json.log",
	        "Name": "/functional-059985",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-059985:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-059985",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "28465384520d8f0bb60bdecea004fe19f00867aa52e2f88d0db9f6e8144cc5f1",
	                "LowerDir": "/var/lib/docker/overlay2/f58f811cd1ba143b7f154b90e3041b92e9aac39862da0e326320338057382175-init/diff:/var/lib/docker/overlay2/e436edcb7322c840f879b3c5d1d6403a3125a1711763277d84155a12f01e0462/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f58f811cd1ba143b7f154b90e3041b92e9aac39862da0e326320338057382175/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f58f811cd1ba143b7f154b90e3041b92e9aac39862da0e326320338057382175/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f58f811cd1ba143b7f154b90e3041b92e9aac39862da0e326320338057382175/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-059985",
	                "Source": "/var/lib/docker/volumes/functional-059985/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-059985",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-059985",
	                "name.minikube.sigs.k8s.io": "functional-059985",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "639f0d64dc493dc7d45e24f95d75b88019e64dcc87ad68bb60f67d1c1c02731c",
	            "SandboxKey": "/var/run/docker/netns/639f0d64dc49",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-059985": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "196fcf3b074f568c2e7586daa1ecef14111229ee5ba62bcd741749a967fdc6f6",
	                    "EndpointID": "416d03b943ae896e32254e021ca2971c63b8e6f933f20cf6d6a38c64029f3545",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "f2:d4:ba:1c:b0:0d",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-059985",
	                        "28465384520d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
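
The inspect output is where the published host ports come from: the kicbase container maps the apiserver's 8441/tcp to 127.0.0.1:33184 and SSH's 22/tcp to 127.0.0.1:33181. A small sketch for extracting one mapping programmatically with a docker-CLI Go template; it assumes only that the docker client is installed, and the template indexes the same NetworkSettings.Ports structure shown above:

	// hostport: print the host port Docker published for the container's
	// apiserver port (8441/tcp). Illustrative sketch using the docker CLI.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
			"functional-059985").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver published at 127.0.0.1:" + strings.TrimSpace(string(out)))
	}
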
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-059985 -n functional-059985
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-059985 logs -n 25: (1.003858163s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                            ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-059985 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                   │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ ssh            │ functional-059985 ssh sudo systemctl is-active crio                                                                                                        │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │                     │
	│ image          │ functional-059985 image load --daemon kicbase/echo-server:functional-059985 --alsologtostderr                                                              │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls                                                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image load --daemon kicbase/echo-server:functional-059985 --alsologtostderr                                                              │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls                                                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image load --daemon kicbase/echo-server:functional-059985 --alsologtostderr                                                              │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls                                                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image save kicbase/echo-server:functional-059985 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image rm kicbase/echo-server:functional-059985 --alsologtostderr                                                                         │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls                                                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr                                       │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls                                                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image save --daemon kicbase/echo-server:functional-059985 --alsologtostderr                                                              │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ ssh            │ functional-059985 ssh sudo cat /etc/test/nested/copy/558759/hosts                                                                                          │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ update-context │ functional-059985 update-context --alsologtostderr -v=2                                                                                                    │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ update-context │ functional-059985 update-context --alsologtostderr -v=2                                                                                                    │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ update-context │ functional-059985 update-context --alsologtostderr -v=2                                                                                                    │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls --format short --alsologtostderr                                                                                                │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls --format yaml --alsologtostderr                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ ssh            │ functional-059985 ssh pgrep buildkitd                                                                                                                      │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │                     │
	│ image          │ functional-059985 image build -t localhost/my-image:functional-059985 testdata/build --alsologtostderr                                                     │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls                                                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls --format json --alsologtostderr                                                                                                 │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	│ image          │ functional-059985 image ls --format table --alsologtostderr                                                                                                │ functional-059985 │ jenkins │ v1.37.0 │ 06 Dec 25 09:20 UTC │ 06 Dec 25 09:20 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:15:22
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:15:22.996535  615253 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:15:22.996811  615253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:15:22.996821  615253 out.go:374] Setting ErrFile to fd 2...
	I1206 09:15:22.996825  615253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:15:22.997221  615253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:15:22.997704  615253 out.go:368] Setting JSON to false
	I1206 09:15:22.998815  615253 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7070,"bootTime":1765005453,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:15:22.998893  615253 start.go:143] virtualization: kvm guest
	I1206 09:15:23.000676  615253 out.go:179] * [functional-059985] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:15:23.002295  615253 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:15:23.002335  615253 notify.go:221] Checking for updates...
	I1206 09:15:23.004817  615253 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:15:23.006160  615253 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:15:23.007393  615253 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:15:23.008584  615253 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:15:23.009742  615253 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:15:23.011225  615253 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 09:15:23.011768  615253 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:15:23.036100  615253 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:15:23.036221  615253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:15:23.094472  615253 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:15:23.08480577 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:15:23.094606  615253 docker.go:319] overlay module found
	I1206 09:15:23.096284  615253 out.go:179] * Using the docker driver based on existing profile
	I1206 09:15:23.097259  615253 start.go:309] selected driver: docker
	I1206 09:15:23.097272  615253 start.go:927] validating driver "docker" against &{Name:functional-059985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-059985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:15:23.097371  615253 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:15:23.098949  615253 out.go:203] 
	W1206 09:15:23.100067  615253 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:15:23.101109  615253 out.go:203] 
	
	
	==> Docker <==
	Dec 06 09:16:20 functional-059985 dockerd[7053]: time="2025-12-06T09:16:20.341888557Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:30 functional-059985 dockerd[7053]: time="2025-12-06T09:16:30.324229450Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:45 functional-059985 dockerd[7053]: time="2025-12-06T09:16:45.248830981Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:16:45 functional-059985 dockerd[7053]: time="2025-12-06T09:16:45.369872506Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:16:45 functional-059985 cri-dockerd[7453]: time="2025-12-06T09:16:45Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Pulling from kubernetesui/dashboard"
	Dec 06 09:16:50 functional-059985 dockerd[7053]: time="2025-12-06T09:16:50.246879785Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:16:50 functional-059985 dockerd[7053]: time="2025-12-06T09:16:50.280881237Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:17:32 functional-059985 dockerd[7053]: time="2025-12-06T09:17:32.390055429Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:17:54 functional-059985 dockerd[7053]: time="2025-12-06T09:17:54.325269997Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:18:04 functional-059985 dockerd[7053]: time="2025-12-06T09:18:04.324223249Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:18:18 functional-059985 dockerd[7053]: time="2025-12-06T09:18:18.246363413Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:18:18 functional-059985 dockerd[7053]: time="2025-12-06T09:18:18.282822570Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:18:20 functional-059985 dockerd[7053]: time="2025-12-06T09:18:20.244688470Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:18:20 functional-059985 dockerd[7053]: time="2025-12-06T09:18:20.274279035Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:20:20 functional-059985 dockerd[7053]: time="2025-12-06T09:20:20.390075065Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:20:20 functional-059985 cri-dockerd[7453]: time="2025-12-06T09:20:20Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Dec 06 09:20:34 functional-059985 dockerd[7053]: 2025/12/06 09:20:34 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Dec 06 09:20:36 functional-059985 dockerd[7053]: time="2025-12-06T09:20:36.577243943Z" level=info msg="sbJoin: gwep4 ''->'1d737d51cb3b', gwep6 ''->''"
	Dec 06 09:20:44 functional-059985 dockerd[7053]: time="2025-12-06T09:20:44.335607968Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:20:47 functional-059985 dockerd[7053]: time="2025-12-06T09:20:47.318002407Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:21:07 functional-059985 dockerd[7053]: time="2025-12-06T09:21:07.245823242Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:21:07 functional-059985 dockerd[7053]: time="2025-12-06T09:21:07.350309235Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:21:07 functional-059985 cri-dockerd[7453]: time="2025-12-06T09:21:07Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Pulling from kubernetesui/metrics-scraper"
	Dec 06 09:21:08 functional-059985 dockerd[7053]: time="2025-12-06T09:21:08.245151689Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:21:08 functional-059985 dockerd[7053]: time="2025-12-06T09:21:08.276492887Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b29dca31e0b71       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Exited              mount-munger              0                   22e1b9c3e3d7f       busybox-mount                               default
	3109ba1ff0eb3       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           9 minutes ago       Running             echo-server               0                   bdbb36766496a       hello-node-connect-7d85dfc575-grhpg         default
	553d8db6419ec       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           10 minutes ago      Running             echo-server               0                   c3d421dee4c45       hello-node-75c85bcc94-ld9pq                 default
	b7db64a6cbcce       52546a367cc9e                                                                                         10 minutes ago      Running             coredns                   2                   f8c1ccfd90716       coredns-66bc5c9577-vxhg7                    kube-system
	6ab49dd2f0ca2       6e38f40d628db                                                                                         10 minutes ago      Running             storage-provisioner       2                   c3e6c73344539       storage-provisioner                         kube-system
	0b137720b9a5f       8aa150647e88a                                                                                         10 minutes ago      Running             kube-proxy                2                   2757a4590d002       kube-proxy-v6ctp                            kube-system
	c9adc92dea8d0       a3e246e9556e9                                                                                         10 minutes ago      Running             etcd                      2                   0300a94259bfe       etcd-functional-059985                      kube-system
	9b2ad3a336b2a       88320b5498ff2                                                                                         10 minutes ago      Running             kube-scheduler            2                   939eac1060257       kube-scheduler-functional-059985            kube-system
	0351e6452c14b       a5f569d49a979                                                                                         10 minutes ago      Running             kube-apiserver            0                   d680ffc821477       kube-apiserver-functional-059985            kube-system
	473a7eb2d2899       01e8bacf0f500                                                                                         10 minutes ago      Running             kube-controller-manager   2                   5ba8bcac57036       kube-controller-manager-functional-059985   kube-system
	aa5173745fbac       6e38f40d628db                                                                                         10 minutes ago      Exited              storage-provisioner       1                   5d0d6464edc32       storage-provisioner                         kube-system
	5626dcb5dc256       52546a367cc9e                                                                                         11 minutes ago      Exited              coredns                   1                   60bd207fbe42f       coredns-66bc5c9577-vxhg7                    kube-system
	0142d84dfcc3e       01e8bacf0f500                                                                                         11 minutes ago      Exited              kube-controller-manager   1                   cd62f004aeb97       kube-controller-manager-functional-059985   kube-system
	9ed47e1ea713d       88320b5498ff2                                                                                         11 minutes ago      Exited              kube-scheduler            1                   42e6bcee13788       kube-scheduler-functional-059985            kube-system
	50154fcfe42bc       a3e246e9556e9                                                                                         11 minutes ago      Exited              etcd                      1                   568bbcf5b2a5a       etcd-functional-059985                      kube-system
	58a88122473ec       8aa150647e88a                                                                                         11 minutes ago      Exited              kube-proxy                1                   af3f271000a2c       kube-proxy-v6ctp                            kube-system
	
	
	==> coredns [5626dcb5dc25] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41874 - 41765 "HINFO IN 4677883846086172525.3053560101530399949. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.424707544s
	
	
	==> coredns [b7db64a6cbcc] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55932 - 63859 "HINFO IN 4009189935978222900.6966185930985369349. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022692491s
	
	
	==> describe nodes <==
	Name:               functional-059985
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-059985
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=functional-059985
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_12_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:12:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-059985
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:24:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:24:33 +0000   Sat, 06 Dec 2025 09:12:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:24:33 +0000   Sat, 06 Dec 2025 09:12:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:24:33 +0000   Sat, 06 Dec 2025 09:12:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:24:33 +0000   Sat, 06 Dec 2025 09:12:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-059985
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                ba47cce0-29a0-432c-b2a4-36f42ef3f157
	  Boot ID:                    41ef56f7-de94-4c23-8e93-ec48e4e68466
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.1.2
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-ld9pq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-grhpg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  default                     mysql-5bb876957f-m9cm6                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-66bc5c9577-vxhg7                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-059985                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-059985              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-059985     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-v6ctp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-059985              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vrsdw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kjcb6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-059985 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-059985 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-059985 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-059985 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-059985 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-059985 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeReady                12m                kubelet          Node functional-059985 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node functional-059985 event: Registered Node functional-059985 in Controller
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/cri-dockerd.sock: connect: connection refused"
	  Normal   RegisteredNode           11m                node-controller  Node functional-059985 event: Registered Node functional-059985 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-059985 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-059985 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-059985 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node functional-059985 event: Registered Node functional-059985 in Controller
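
The ContainerGCFailed warning above means kubelet briefly lost its CRI socket while cri-dockerd restarted. A hedged way to confirm the socket came back, using the same minikube binary and profile name as the rest of this log (the systemd unit names assume the standard cri-dockerd packaging):

  # Check the cri-dockerd units and the socket path kubelet dials:
  out/minikube-linux-amd64 -p functional-059985 ssh -- sudo systemctl is-active cri-docker.service cri-docker.socket
  out/minikube-linux-amd64 -p functional-059985 ssh -- ls -l /var/run/cri-dockerd.sock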
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 8a 6f 1a 22 ad 40 08 06
	[  +0.251239] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0e a7 c4 94 ef 5a 08 06
	[  +0.431184] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff ae 41 2d 73 86 ed 08 06
	[  +0.515220] IPv4: martian source 10.244.0.8 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[  +0.026943] IPv4: martian source 10.244.0.8 from 10.244.0.6, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[  +1.299675] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 16 c7 72 c4 93 08 06
	[  +0.000525] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[Dec 6 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 3f cc d3 d9 16 08 06
	[  +0.000633] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[  +0.000768] IPv4: martian source 10.244.0.32 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[Dec 6 09:12] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 81 03 c8 c4 0c 08 06
	[Dec 6 09:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 ce 4a 36 be 39 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 6e 96 20 7e 61 08 06
	
	
	==> etcd [50154fcfe42b] <==
	{"level":"warn","ts":"2025-12-06T09:13:45.463799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.474165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.481169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.488080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.495192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.502198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.509805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.518679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.525822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.540117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.548217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.557765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.564644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.571437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.579729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.586860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.595028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.602823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.611318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.619029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.638296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.645267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.653542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.660393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:13:45.709308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46664","server-name":"","error":"EOF"}
	
	
	==> etcd [c9adc92dea8d] <==
	{"level":"warn","ts":"2025-12-06T09:14:31.331172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.338201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.345838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.352739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.367132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.374191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.388082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.394438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.400942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.407869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.415789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.422814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.429823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.436647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.444223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.451799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.458882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.480324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.484097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.491808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.498307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:14:31.539312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53492","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:24:31.033539Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1387}
	{"level":"info","ts":"2025-12-06T09:24:31.053029Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1387,"took":"19.125995ms","hash":1649980549,"current-db-size-bytes":3932160,"current-db-size":"3.9 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-12-06T09:24:31.053080Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1649980549,"revision":1387,"compact-revision":-1}
	
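The repeated "rejected connection on client endpoint ... EOF" warnings from both etcd containers are the usual signature of plain-TCP probes: a client connects to the TLS port and closes without completing a handshake, so etcd logs EOF. They do not indicate data loss; the compaction lines above show the store working normally. A direct health check, assuming minikube's default certificate directory inside the etcd pod:

  kubectl --context functional-059985 -n kube-system exec etcd-functional-059985 -- \
    etcdctl --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint health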
	
	==> kernel <==
	 09:24:51 up  2:07,  0 user,  load average: 0.12, 0.33, 0.97
	Linux functional-059985 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [0351e6452c14] <==
	I1206 09:14:32.003058       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:14:32.003004       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 09:14:32.003065       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 09:14:32.007830       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1206 09:14:32.009314       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:14:32.048749       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:14:32.050828       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 09:14:32.258547       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:14:32.905810       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 09:14:33.384169       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:14:33.418862       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:14:33.449392       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:14:33.458350       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:14:35.448971       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:14:35.650513       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:14:35.749644       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:14:45.456316       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.120.143"}
	I1206 09:14:49.794043       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.127.39"}
	I1206 09:14:49.818443       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.107.147"}
	I1206 09:14:51.393889       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.122.58"}
	I1206 09:15:00.202599       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.219.219"}
	I1206 09:15:23.954224       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:15:24.067722       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.141.231"}
	I1206 09:15:24.077799       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.99.20"}
	I1206 09:24:31.922268       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
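
The alloc.go lines above record the ClusterIPs handed to each test service. A quick cross-check that the Service objects still hold those IPs (context name taken from this log):

  kubectl --context functional-059985 -n default get svc nginx-svc mysql hello-node hello-node-connect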
	
	
	==> kube-controller-manager [0142d84dfcc3] <==
	I1206 09:13:49.704844       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:13:49.704821       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:13:49.704860       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:13:49.704853       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 09:13:49.717647       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:13:49.719806       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 09:13:49.722112       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 09:13:49.724377       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 09:13:49.726628       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1206 09:13:49.731943       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:13:49.731958       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:13:49.731964       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:13:49.753720       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1206 09:13:49.753748       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1206 09:13:49.753778       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:13:49.753812       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 09:13:49.753848       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1206 09:13:49.753849       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 09:13:49.754948       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 09:13:49.754966       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1206 09:13:49.754998       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1206 09:13:49.757287       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 09:13:49.757330       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:13:49.758526       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1206 09:13:49.775995       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [473a7eb2d289] <==
	I1206 09:14:35.270847       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 09:14:35.281157       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 09:14:35.288628       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 09:14:35.296517       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1206 09:14:35.296563       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 09:14:35.296619       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 09:14:35.296703       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1206 09:14:35.296723       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 09:14:35.296765       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 09:14:35.296793       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 09:14:35.296805       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 09:14:35.296728       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 09:14:35.300953       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 09:14:35.310163       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1206 09:14:35.310394       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1206 09:14:35.310514       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-059985"
	I1206 09:14:35.310586       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1206 09:14:35.313498       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 09:14:35.358814       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1206 09:15:24.003061       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:15:24.006722       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:15:24.009652       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:15:24.011680       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:15:24.012828       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:15:24.017905       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
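
The forbidden errors above are a creation-order race, not a persistent failure: the dashboard ReplicaSets were synced before their ServiceAccount existed, and the controller retries until it does. The node pod list earlier in this log shows both dashboard pods were eventually created. To confirm directly:

  kubectl --context functional-059985 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard
  kubectl --context functional-059985 -n kubernetes-dashboard get pods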
	
	
	==> kube-proxy [0b137720b9a5] <==
	I1206 09:14:32.829317       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:14:32.892872       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 09:14:32.993670       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:14:32.993735       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:14:32.993860       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:14:33.022817       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:14:33.022898       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:14:33.029718       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:14:33.030143       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:14:33.030175       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:14:33.031795       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:14:33.031864       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:14:33.031804       1 config.go:200] "Starting service config controller"
	I1206 09:14:33.032094       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:14:33.031831       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:14:33.032130       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:14:33.032680       1 config.go:309] "Starting node config controller"
	I1206 09:14:33.035783       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:14:33.035984       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:14:33.132294       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:14:33.132328       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:14:33.132354       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [58a88122473e] <==
	I1206 09:13:44.331690       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:13:44.408322       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1206 09:13:46.115388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-059985\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1206 09:13:47.208549       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 09:13:47.208598       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:13:47.208735       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:13:47.241427       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:13:47.241497       1 server_linux.go:132] "Using iptables Proxier"
	I1206 09:13:47.249242       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:13:47.249765       1 server.go:527] "Version info" version="v1.34.2"
	I1206 09:13:47.249807       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:13:47.251432       1 config.go:309] "Starting node config controller"
	I1206 09:13:47.251457       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:13:47.251643       1 config.go:200] "Starting service config controller"
	I1206 09:13:47.251657       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:13:47.251675       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:13:47.251679       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:13:47.251695       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:13:47.251700       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:13:47.352267       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:13:47.352389       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:13:47.352401       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 09:13:47.352419       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
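
The single "Failed to watch" error in this older kube-proxy instance is a startup race: the first list went out before kube-proxy's RBAC bindings were visible, and the later "Caches are synced" lines show it recovered. Impersonation can verify the permission is in place now (a sketch; assumes the RBAC authorizer, which minikube enables by default):

  kubectl --context functional-059985 auth can-i list nodes \
    --as=system:serviceaccount:kube-system:kube-proxy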
	
	
	==> kube-scheduler [9b2ad3a336b2] <==
	I1206 09:14:30.743179       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:14:31.945119       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:14:31.945257       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:14:31.945319       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:14:31.945353       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:14:31.967190       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 09:14:31.967216       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:14:31.968983       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:14:31.969033       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:14:31.969341       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:14:31.969379       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:14:32.069208       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [9ed47e1ea713] <==
	I1206 09:13:45.496447       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:13:46.109389       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:13:46.109495       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:13:46.109527       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:13:46.109551       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:13:46.132640       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1206 09:13:46.132753       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:13:46.141991       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:13:46.142035       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:13:46.142546       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:13:46.142747       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:13:46.242419       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
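
Both scheduler instances log the same extension-apiserver-authentication warnings; on a stock minikube they are transient and harmless. If they persisted, the log's own suggestion translates to roughly the following (the rolebinding name is hypothetical, and kube-scheduler authenticates as the user system:kube-scheduler rather than a ServiceAccount):

  kubectl --context functional-059985 -n kube-system create rolebinding kube-scheduler-authentication-reader \
    --role=extension-apiserver-authentication-reader \
    --user=system:kube-scheduler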
	
	
	==> kubelet <==
	Dec 06 09:23:50 functional-059985 kubelet[8513]: E1206 09:23:50.227854    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7ca1a0b0-3356-4391-88ae-3a31e43c8a5d"
	Dec 06 09:23:51 functional-059985 kubelet[8513]: E1206 09:23:51.228668    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-m9cm6" podUID="3ea71897-e6a1-4328-8eac-112fea3296e1"
	Dec 06 09:23:52 functional-059985 kubelet[8513]: E1206 09:23:52.226208    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="54c1aaaf-c72c-4af5-9ce5-4543673d5a2c"
	Dec 06 09:23:53 functional-059985 kubelet[8513]: E1206 09:23:53.228538    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vrsdw" podUID="d63534dd-a49a-49cf-a04d-e0d5a7c06cff"
	Dec 06 09:23:57 functional-059985 kubelet[8513]: E1206 09:23:57.227817    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kjcb6" podUID="213be88b-e48a-4ac2-b5fb-e3313d535b4a"
	Dec 06 09:24:05 functional-059985 kubelet[8513]: E1206 09:24:05.229016    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7ca1a0b0-3356-4391-88ae-3a31e43c8a5d"
	Dec 06 09:24:06 functional-059985 kubelet[8513]: E1206 09:24:06.228083    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vrsdw" podUID="d63534dd-a49a-49cf-a04d-e0d5a7c06cff"
	Dec 06 09:24:06 functional-059985 kubelet[8513]: E1206 09:24:06.228132    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-m9cm6" podUID="3ea71897-e6a1-4328-8eac-112fea3296e1"
	Dec 06 09:24:07 functional-059985 kubelet[8513]: E1206 09:24:07.226465    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="54c1aaaf-c72c-4af5-9ce5-4543673d5a2c"
	Dec 06 09:24:08 functional-059985 kubelet[8513]: E1206 09:24:08.228277    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kjcb6" podUID="213be88b-e48a-4ac2-b5fb-e3313d535b4a"
	Dec 06 09:24:18 functional-059985 kubelet[8513]: E1206 09:24:18.226531    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="54c1aaaf-c72c-4af5-9ce5-4543673d5a2c"
	Dec 06 09:24:20 functional-059985 kubelet[8513]: E1206 09:24:20.228087    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7ca1a0b0-3356-4391-88ae-3a31e43c8a5d"
	Dec 06 09:24:20 functional-059985 kubelet[8513]: E1206 09:24:20.228188    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-m9cm6" podUID="3ea71897-e6a1-4328-8eac-112fea3296e1"
	Dec 06 09:24:21 functional-059985 kubelet[8513]: E1206 09:24:21.228668    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vrsdw" podUID="d63534dd-a49a-49cf-a04d-e0d5a7c06cff"
	Dec 06 09:24:23 functional-059985 kubelet[8513]: E1206 09:24:23.228975    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kjcb6" podUID="213be88b-e48a-4ac2-b5fb-e3313d535b4a"
	Dec 06 09:24:31 functional-059985 kubelet[8513]: E1206 09:24:31.228994    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7ca1a0b0-3356-4391-88ae-3a31e43c8a5d"
	Dec 06 09:24:32 functional-059985 kubelet[8513]: E1206 09:24:32.227662    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vrsdw" podUID="d63534dd-a49a-49cf-a04d-e0d5a7c06cff"
	Dec 06 09:24:33 functional-059985 kubelet[8513]: E1206 09:24:33.226310    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="54c1aaaf-c72c-4af5-9ce5-4543673d5a2c"
	Dec 06 09:24:35 functional-059985 kubelet[8513]: E1206 09:24:35.228172    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-m9cm6" podUID="3ea71897-e6a1-4328-8eac-112fea3296e1"
	Dec 06 09:24:36 functional-059985 kubelet[8513]: E1206 09:24:36.227541    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kjcb6" podUID="213be88b-e48a-4ac2-b5fb-e3313d535b4a"
	Dec 06 09:24:42 functional-059985 kubelet[8513]: E1206 09:24:42.227605    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="7ca1a0b0-3356-4391-88ae-3a31e43c8a5d"
	Dec 06 09:24:46 functional-059985 kubelet[8513]: E1206 09:24:46.225867    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="54c1aaaf-c72c-4af5-9ce5-4543673d5a2c"
	Dec 06 09:24:47 functional-059985 kubelet[8513]: E1206 09:24:47.228283    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vrsdw" podUID="d63534dd-a49a-49cf-a04d-e0d5a7c06cff"
	Dec 06 09:24:47 functional-059985 kubelet[8513]: E1206 09:24:47.228348    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-m9cm6" podUID="3ea71897-e6a1-4328-8eac-112fea3296e1"
	Dec 06 09:24:49 functional-059985 kubelet[8513]: E1206 09:24:49.228206    8513 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kjcb6" podUID="213be88b-e48a-4ac2-b5fb-e3313d535b4a"
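
Every kubelet error above has the same root cause: unauthenticated Docker Hub pulls hit the toomanyrequests rate limit, so the nginx, mysql, and dashboard images never arrive and the pods sit in ImagePullBackOff. One hedged workaround is to pull each image once on a host that still has quota (or registry credentials) and side-load it, so kubelet never contacts Docker Hub:

  # Sketch using one of the failing images named in the log above:
  docker pull docker.io/nginx:alpine
  out/minikube-linux-amd64 -p functional-059985 image load docker.io/nginx:alpine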
	
	
	==> storage-provisioner [6ab49dd2f0ca] <==
	W1206 09:24:26.418899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:28.422516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:28.426734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:30.430082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:30.434293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:32.437676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:32.443517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:34.446845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:34.451605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:36.455152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:36.459135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:38.462980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:38.468036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:40.471242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:40.475798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:42.479140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:42.483995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:44.487544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:44.491691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:46.494695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:46.498366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:48.501643       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:48.506552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:50.509246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:24:50.514267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
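
The steady stream of Endpoints deprecation warnings comes from the provisioner's leader election, which renews its lock through a v1 Endpoints object roughly every two seconds, matching the paired timestamps above; they are noise, not failures. The lock object is named later in this log and can be inspected directly:

  kubectl --context functional-059985 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml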
	
	
	==> storage-provisioner [aa5173745fba] <==
	I1206 09:13:56.306512       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1206 09:13:56.314754       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:13:56.314789       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:13:56.317036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:13:59.771858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:04.031955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:07.631138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:10.685130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:13.707204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:13.712167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:14:13.712328       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:14:13.712496       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-059985_8926ee2a-6dc8-4633-9118-a97f633706c5!
	I1206 09:14:13.712508       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e5608dec-0e94-4ab1-bb57-edede591ea22", APIVersion:"v1", ResourceVersion:"572", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-059985_8926ee2a-6dc8-4633-9118-a97f633706c5 became leader
	W1206 09:14:13.714471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:13.720248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:14:13.812832       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-059985_8926ee2a-6dc8-4633-9118-a97f633706c5!
	W1206 09:14:15.723800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:15.728402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:17.731603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:17.736896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:19.740106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:19.744502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:21.747960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:14:21.752249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-059985 -n functional-059985
helpers_test.go:269: (dbg) Run:  kubectl --context functional-059985 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-m9cm6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vrsdw kubernetes-dashboard-855c9754f9-kjcb6
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-059985 describe pod busybox-mount mysql-5bb876957f-m9cm6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vrsdw kubernetes-dashboard-855c9754f9-kjcb6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-059985 describe pod busybox-mount mysql-5bb876957f-m9cm6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vrsdw kubernetes-dashboard-855c9754f9-kjcb6: exit status 1 (83.627704ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-059985/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:15:13 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://b29dca31e0b7194c528fe0f7d691fa9a293fd3658f5220333a7846f0a0ee5d13
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 06 Dec 2025 09:15:15 +0000
	      Finished:     Sat, 06 Dec 2025 09:15:15 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b65rl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-b65rl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m39s  default-scheduler  Successfully assigned default/busybox-mount to functional-059985
	  Normal  Pulling    9m39s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m37s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.485s (1.485s including waiting). Image size: 4403845 bytes.
	  Normal  Created    9m37s  kubelet            Created container: mount-munger
	  Normal  Started    9m37s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-m9cm6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-059985/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:14:49 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lxxf9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lxxf9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/mysql-5bb876957f-m9cm6 to functional-059985
	  Warning  Failed     10m                    kubelet            Failed to pull image "docker.io/mysql:5.7": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m58s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     6m58s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     6m58s (x4 over 9m47s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4m49s (x21 over 10m)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4m49s (x21 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-059985/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:14:51 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jv26d (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jv26d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/nginx-svc to functional-059985
	  Normal   Pulling    7m20s (x5 over 10m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     7m20s (x5 over 10m)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m20s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m57s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m46s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-059985/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:14:56 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rck6k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-rck6k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m55s                   default-scheduler  Successfully assigned default/sp-pod to functional-059985
	  Normal   Pulling    6m48s (x5 over 9m55s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     6m48s (x5 over 9m55s)   kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m48s (x5 over 9m55s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m45s (x21 over 9m55s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4m45s (x21 over 9m55s)  kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vrsdw" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-kjcb6" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-059985 describe pod busybox-mount mysql-5bb876957f-m9cm6 nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-vrsdw kubernetes-dashboard-855c9754f9-kjcb6: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.52s)
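All of the stuck pods above (mysql-5bb876957f-m9cm6, nginx-svc, sp-pod) share one root cause: Docker Hub's unauthenticated pull rate limit ("toomanyrequests"), not a cluster fault. A minimal sketch of ways to unblock a run like this, assuming the profile name from this log; none of these commands appear in the captured output:

    # authenticate the node's Docker daemon so pulls count against an account quota
    minikube -p functional-059985 ssh -- docker login
    # or start the profile against a pull-through mirror instead of docker.io
    minikube start -p functional-059985 --registry-mirror=https://mirror.gcr.io
    # or side-load an image from the host, bypassing the registry entirely
    minikube -p functional-059985 image load docker.io/mysql:5.7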

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-059985 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [7ca1a0b0-3356-4391-88ae-3a31e43c8a5d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-059985 -n functional-059985
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-12-06 09:18:51.729244539 +0000 UTC m=+1117.435884192
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-059985 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-059985 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-059985/192.168.49.2
Start Time:       Sat, 06 Dec 2025 09:14:51 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:  10.244.0.9
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jv26d (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-jv26d:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-059985
Normal   Pulling    79s (x5 over 4m)     kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     79s (x5 over 3m59s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     79s (x5 over 3m59s)  kubelet            Error: ErrImagePull
Normal   BackOff    0s (x15 over 3m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     0s (x15 over 3m59s)  kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-059985 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-059985 logs nginx-svc -n default: exit status 1 (64.413461ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-059985 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.66s)
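The tunnel setup times out for the same reason. Rather than waiting out the 4m0s pod wait, the rate limit can be confirmed directly on the node; a sketch, assuming the same profile:

    # reproduce the pull by hand; expect the same toomanyrequests error
    minikube -p functional-059985 ssh -- docker pull docker.io/nginx:alpine
    # or list only the failing image-pull events in the test namespace
    kubectl --context functional-059985 get events -n default --field-selector reason=Failed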

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (111.75s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1206 09:18:51.856321  558759 retry.go:31] will retry after 2.481594425s: Temporary Error: Get "http:": http: no Host in request URL
I1206 09:18:54.338387  558759 retry.go:31] will retry after 2.310171043s: Temporary Error: Get "http:": http: no Host in request URL
I1206 09:18:56.648744  558759 retry.go:31] will retry after 4.577779793s: Temporary Error: Get "http:": http: no Host in request URL
I1206 09:19:01.227676  558759 retry.go:31] will retry after 7.43479426s: Temporary Error: Get "http:": http: no Host in request URL
I1206 09:19:08.663588  558759 retry.go:31] will retry after 18.983285661s: Temporary Error: Get "http:": http: no Host in request URL
I1206 09:19:27.647172  558759 retry.go:31] will retry after 19.234471694s: Temporary Error: Get "http:": http: no Host in request URL
I1206 09:19:46.881965  558759 retry.go:31] will retry after 25.233064019s: Temporary Error: Get "http:": http: no Host in request URL
I1206 09:20:12.115994  558759 retry.go:31] will retry after 31.433691206s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-059985 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.100.122.58   10.100.122.58   80:32532/TCP   5m52s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (111.75s)
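Two things fail here at once: the test assembles an empty URL (Get "http:" carries no host), and even a correct URL could not have returned "Welcome to nginx!" because the backing pod never pulled its image. With the tunnel running, the external IP the service reports can be probed by hand; a sketch, not taken from the log:

    minikube tunnel -p functional-059985 &
    IP=$(kubectl --context functional-059985 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -sS "http://${IP}/"   # would still fail in this run: nginx-svc is in ImagePullBackOff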

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (3.03s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-326239 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-326239 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-326239 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-326239 --alsologtostderr -v=1] stderr:
I1206 09:27:50.064878  648563 out.go:360] Setting OutFile to fd 1 ...
I1206 09:27:50.065182  648563 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:27:50.065193  648563 out.go:374] Setting ErrFile to fd 2...
I1206 09:27:50.065199  648563 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:27:50.065383  648563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
I1206 09:27:50.065653  648563 mustload.go:66] Loading cluster: functional-326239
I1206 09:27:50.066021  648563 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1206 09:27:50.066424  648563 cli_runner.go:164] Run: docker container inspect functional-326239 --format={{.State.Status}}
I1206 09:27:50.084337  648563 host.go:66] Checking if "functional-326239" exists ...
I1206 09:27:50.084613  648563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 09:27:50.139395  648563 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:27:50.129978539 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1206 09:27:50.139509  648563 api_server.go:166] Checking apiserver status ...
I1206 09:27:50.139553  648563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1206 09:27:50.139592  648563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326239
I1206 09:27:50.156759  648563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/functional-326239/id_rsa Username:docker}
I1206 09:27:50.255009  648563 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9224/cgroup
W1206 09:27:50.263204  648563 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/9224/cgroup: Process exited with status 1
stdout:

stderr:
I1206 09:27:50.263253  648563 ssh_runner.go:195] Run: ls
I1206 09:27:50.266880  648563 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1206 09:27:50.272117  648563 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1206 09:27:50.272173  648563 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1206 09:27:50.272343  648563 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1206 09:27:50.272357  648563 addons.go:70] Setting dashboard=true in profile "functional-326239"
I1206 09:27:50.272367  648563 addons.go:239] Setting addon dashboard=true in "functional-326239"
I1206 09:27:50.272390  648563 host.go:66] Checking if "functional-326239" exists ...
I1206 09:27:50.272691  648563 cli_runner.go:164] Run: docker container inspect functional-326239 --format={{.State.Status}}
I1206 09:27:50.292439  648563 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1206 09:27:50.293523  648563 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1206 09:27:50.294553  648563 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1206 09:27:50.294569  648563 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1206 09:27:50.294633  648563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326239
I1206 09:27:50.313417  648563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/functional-326239/id_rsa Username:docker}
I1206 09:27:50.411864  648563 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1206 09:27:50.411893  648563 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1206 09:27:50.425374  648563 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1206 09:27:50.425401  648563 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1206 09:27:50.437797  648563 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1206 09:27:50.437818  648563 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1206 09:27:50.450630  648563 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1206 09:27:50.450649  648563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1206 09:27:50.463141  648563 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1206 09:27:50.463162  648563 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1206 09:27:50.475289  648563 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1206 09:27:50.475311  648563 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1206 09:27:50.487321  648563 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1206 09:27:50.487343  648563 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1206 09:27:50.499254  648563 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1206 09:27:50.499280  648563 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1206 09:27:50.511108  648563 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1206 09:27:50.511127  648563 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1206 09:27:50.522887  648563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1206 09:27:50.940802  648563 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-326239 addons enable metrics-server

I1206 09:27:50.941864  648563 addons.go:202] Writing out "functional-326239" config to set dashboard=true...
W1206 09:27:50.942139  648563 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1206 09:27:50.942845  648563 kapi.go:59] client config for functional-326239: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.key", CAFile:"/home/jenkins/minikube-integration/22047-555179/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1206 09:27:50.943361  648563 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1206 09:27:50.943379  648563 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1206 09:27:50.943385  648563 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1206 09:27:50.943391  648563 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1206 09:27:50.943396  648563 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1206 09:27:50.950607  648563 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  79c26c3e-024b-4d72-8600-533c2130feb8 872 0 2025-12-06 09:27:50 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-12-06 09:27:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.111.177.225,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.111.177.225],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1206 09:27:50.950762  648563 out.go:285] * Launching proxy ...
* Launching proxy ...
I1206 09:27:50.950816  648563 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-326239 proxy --port 36195]
I1206 09:27:50.951097  648563 dashboard.go:159] Waiting for kubectl to output host:port ...
I1206 09:27:50.998677  648563 out.go:203] 
W1206 09:27:50.999750  648563 out.go:285] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W1206 09:27:50.999768  648563 out.go:285] * 
* 
W1206 09:27:51.004348  648563 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1206 09:27:51.005547  648563 out.go:203] 
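HOST_KUBECTL_PROXY with "readByteWithTimeout: EOF" means the spawned kubectl proxy exited before printing its usual "Starting to serve on host:port" line, so minikube never saw a URL. Re-running by hand the exact command logged at dashboard.go:154 above surfaces the underlying error; a commonly suspected cause with a fixed --port is the port already being in use:

    /usr/local/bin/kubectl --context functional-326239 proxy --port 36195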
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-326239
helpers_test.go:243: (dbg) docker inspect functional-326239:

-- stdout --
	[
	    {
	        "Id": "2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b",
	        "Created": "2025-12-06T09:24:58.563992644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 627619,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:24:58.603754235Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/hosts",
	        "LogPath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b-json.log",
	        "Name": "/functional-326239",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-326239:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-326239",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b",
	                "LowerDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027-init/diff:/var/lib/docker/overlay2/e436edcb7322c840f879b3c5d1d6403a3125a1711763277d84155a12f01e0462/diff",
	                "MergedDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027/merged",
	                "UpperDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027/diff",
	                "WorkDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-326239",
	                "Source": "/var/lib/docker/volumes/functional-326239/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-326239",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-326239",
	                "name.minikube.sigs.k8s.io": "functional-326239",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d8644ebd1266b07608ac54125ce4be0a55df19ae0337a89715d5a7b71c158c36",
	            "SandboxKey": "/var/run/docker/netns/d8644ebd1266",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-326239": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa31b1b5343ce0077ab7432095e00979a827c42de8b3b6cbea2885bebf249faf",
	                    "EndpointID": "c7f2a99f28f1ab2776d35311f6b88b250c8e144bc0014a825f0b3bc1d8107e4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "9a:e0:63:fc:a4:9f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-326239",
	                        "2b1e49e27471"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
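The harness extracts host-port mappings from this inspect output with a Go template (the 22/tcp variant appears verbatim in the stderr above); the same template works for any published port. For example, per the NetworkSettings block above, the apiserver port 8441/tcp maps to 127.0.0.1:33189:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-326239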
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-326239 -n functional-326239
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-326239 logs -n 25: (1.042223708s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh        │ functional-326239 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ mount      │ -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1490801804/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ ssh        │ functional-326239 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh -- ls -la /mount-9p                                                                                                           │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh sudo umount -f /mount-9p                                                                                                      │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ mount      │ -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo492193237/001:/mount1 --alsologtostderr -v=1                 │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ mount      │ -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo492193237/001:/mount2 --alsologtostderr -v=1                 │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ mount      │ -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo492193237/001:/mount3 --alsologtostderr -v=1                 │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ ssh        │ functional-326239 ssh findmnt -T /mount1                                                                                                            │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ ssh        │ functional-326239 ssh findmnt -T /mount1                                                                                                            │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh findmnt -T /mount2                                                                                                            │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh findmnt -T /mount3                                                                                                            │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ mount      │ -p functional-326239 --kill=true                                                                                                                    │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ ssh        │ functional-326239 ssh sudo cat /etc/ssl/certs/558759.pem                                                                                            │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh sudo cat /usr/share/ca-certificates/558759.pem                                                                                │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                            │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh sudo cat /etc/ssl/certs/5587592.pem                                                                                           │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh sudo cat /usr/share/ca-certificates/5587592.pem                                                                               │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                            │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ start      │ -p functional-326239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0     │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ docker-env │ functional-326239 docker-env                                                                                                                        │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ docker-env │ functional-326239 docker-env                                                                                                                        │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ start      │ -p functional-326239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0     │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ start      │ -p functional-326239 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0               │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ dashboard  │ --url --port 36195 -p functional-326239 --alsologtostderr -v=1                                                                                      │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	└────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:27:49
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:27:49.849880  648422 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:27:49.849988  648422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:49.849996  648422 out.go:374] Setting ErrFile to fd 2...
	I1206 09:27:49.850000  648422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:49.850222  648422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:27:49.850643  648422 out.go:368] Setting JSON to false
	I1206 09:27:49.851669  648422 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7817,"bootTime":1765005453,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:27:49.851731  648422 start.go:143] virtualization: kvm guest
	I1206 09:27:49.853403  648422 out.go:179] * [functional-326239] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:27:49.854528  648422 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:27:49.854519  648422 notify.go:221] Checking for updates...
	I1206 09:27:49.856047  648422 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:27:49.857413  648422 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:27:49.858509  648422 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:27:49.859659  648422 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:27:49.860770  648422 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:27:49.862286  648422 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:49.862844  648422 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:27:49.885450  648422 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:27:49.885628  648422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:27:49.942760  648422 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:27:49.933265972 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:27:49.942863  648422 docker.go:319] overlay module found
	I1206 09:27:49.945331  648422 out.go:179] * Using the docker driver based on existing profile
	I1206 09:27:49.946651  648422 start.go:309] selected driver: docker
	I1206 09:27:49.946664  648422 start.go:927] validating driver "docker" against &{Name:functional-326239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-326239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:49.946748  648422 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:27:49.946833  648422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:27:50.002539  648422 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:27:49.993429398 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:27:50.003268  648422 cni.go:84] Creating CNI manager for ""
	I1206 09:27:50.003381  648422 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1206 09:27:50.003433  648422 start.go:353] cluster config:
	{Name:functional-326239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-326239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:50.005091  648422 out.go:179] * dry-run validation complete!
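Note: the trace above ends at dry-run validation, i.e. a start invocation that only validates the driver and cluster configuration without mutating system state. Assuming the same binary and profile, it corresponds to an invocation of the form:

  out/minikube-linux-amd64 start -p functional-326239 --dry-run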
	
	
	==> Docker <==
	Dec 06 09:27:27 functional-326239 cri-dockerd[7694]: time="2025-12-06T09:27:27Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Dec 06 09:27:27 functional-326239 dockerd[7374]: time="2025-12-06T09:27:27.626032527Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:27:29 functional-326239 dockerd[7374]: time="2025-12-06T09:27:29.684082674Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=52ecff40c4ff ep=k8s_POD_hello-node-5758569b79-x4599_default_47d3fefa-9586-4712-914d-c9afc666299e_0 net=none nid=9b778ed5217f
	Dec 06 09:27:29 functional-326239 cri-dockerd[7694]: time="2025-12-06T09:27:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/56fe66ae142deeda4792f3706612b047345afb7cfab870dd7672b7272b891cee/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 06 09:27:29 functional-326239 dockerd[7374]: time="2025-12-06T09:27:29.911518567Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:27:29 functional-326239 cri-dockerd[7694]: time="2025-12-06T09:27:29Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
	Dec 06 09:27:31 functional-326239 dockerd[7374]: time="2025-12-06T09:27:31.852011853Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=edeb0fa668f4 ep=k8s_POD_sp-pod_default_8de345de-964f-44a8-9994-19eb0772df93_0 net=none nid=9b778ed5217f
	Dec 06 09:27:31 functional-326239 cri-dockerd[7694]: time="2025-12-06T09:27:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/41a93cd0090da52176d42908a13e2443635b21a9c877261daa91da3a938f2102/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 06 09:27:32 functional-326239 dockerd[7374]: time="2025-12-06T09:27:32.009112915Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:27:38 functional-326239 dockerd[7374]: time="2025-12-06T09:27:38.020245422Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=aa6a1a62841e ep=k8s_POD_busybox-mount_default_2a52ddca-637a-4932-a22f-cec2d38c1df0_0 net=none nid=9b778ed5217f
	Dec 06 09:27:38 functional-326239 cri-dockerd[7694]: time="2025-12-06T09:27:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b0778e6b23fcbc8f9473eae77abe66a0d516879be638e520de64a39be31bd30e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 06 09:27:39 functional-326239 cri-dockerd[7694]: time="2025-12-06T09:27:39Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Dec 06 09:27:39 functional-326239 dockerd[7374]: time="2025-12-06T09:27:39.572778645Z" level=info msg="ignoring event" container=a0fcd81222f118539b5967330da5243f390d47260cea6ccca50207c84ffeab6c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 09:27:41 functional-326239 dockerd[7374]: time="2025-12-06T09:27:41.445728503Z" level=info msg="ignoring event" container=b0778e6b23fcbc8f9473eae77abe66a0d516879be638e520de64a39be31bd30e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 06 09:27:41 functional-326239 dockerd[7374]: time="2025-12-06T09:27:41.818480490Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:27:43 functional-326239 dockerd[7374]: time="2025-12-06T09:27:43.790004620Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:27:43 functional-326239 dockerd[7374]: time="2025-12-06T09:27:43.885848215Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:27:51 functional-326239 dockerd[7374]: time="2025-12-06T09:27:51.341921449Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=5be4c9d01f95 ep=k8s_POD_dashboard-metrics-scraper-5565989548-sbmk5_kubernetes-dashboard_27202e3f-a7f1-4a1d-8885-3705d48bb1b7_0 net=none nid=9b778ed5217f
	Dec 06 09:27:51 functional-326239 dockerd[7374]: time="2025-12-06T09:27:51.343673414Z" level=info msg="sbJoin: gwep4 ''->'', gwep6 ''->''" eid=f1f06777183b ep=k8s_POD_kubernetes-dashboard-b84665fb8-zlv6s_kubernetes-dashboard_ac224612-ccb6-4df5-8fdb-c2360339af04_0 net=none nid=9b778ed5217f
	Dec 06 09:27:51 functional-326239 cri-dockerd[7694]: time="2025-12-06T09:27:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8cd607e1acfea4d120c14bcf15d319bc30714cf411e2cbe83a8b4ffd390cce3c/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 06 09:27:51 functional-326239 cri-dockerd[7694]: time="2025-12-06T09:27:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/faf265744bc91f294482541c1c4666c60ab19594cb570dd7e96f104b399ff23a/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Dec 06 09:27:51 functional-326239 dockerd[7374]: time="2025-12-06T09:27:51.446430999Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:27:51 functional-326239 dockerd[7374]: time="2025-12-06T09:27:51.478291286Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:27:51 functional-326239 dockerd[7374]: time="2025-12-06T09:27:51.496460992Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:27:51 functional-326239 dockerd[7374]: time="2025-12-06T09:27:51.526457981Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
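The recurring "toomanyrequests" errors above are Docker Hub's anonymous pull rate limit, and they account for the ImagePullBackOff failures seen elsewhere in this report. One way to check the remaining anonymous quota from the affected host is Docker's documented ratelimitpreview probe (a sketch, assuming curl and jq are installed):

  TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
  curl -sI -H "Authorization: Bearer $TOKEN" \
    https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

The usual mitigations are authenticated pulls (docker login) or a registry mirror, e.g. minikube start --registry-mirror=https://mirror.gcr.io.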
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a0fcd81222f11       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   12 seconds ago       Exited              mount-munger              0                   b0778e6b23fcb       busybox-mount                               default
	3dfef744435d8       nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14                         24 seconds ago       Running             nginx                     0                   e349fc4073f7d       nginx-svc                                   default
	2848af042375d       aa5e3ebc0dfed                                                                                         49 seconds ago       Running             coredns                   2                   7a2e0dfb83c2d       coredns-7d764666f9-dpsjp                    kube-system
	de7b60d3f85f2       8a4ded35a3eb1                                                                                         49 seconds ago       Running             kube-proxy                2                   3fe5270d01a93       kube-proxy-4cczw                            kube-system
	87dba194ce022       6e38f40d628db                                                                                         49 seconds ago       Running             storage-provisioner       3                   b129f4caf585d       storage-provisioner                         kube-system
	89bcf71329c96       45f3cc72d235f                                                                                         52 seconds ago       Running             kube-controller-manager   2                   ee92ed20ec021       kube-controller-manager-functional-326239   kube-system
	2876efeddbfb0       7bb6219ddab95                                                                                         52 seconds ago       Running             kube-scheduler            2                   745e53746de0f       kube-scheduler-functional-326239            kube-system
	9650b470b5357       aa9d02839d8de                                                                                         52 seconds ago       Running             kube-apiserver            0                   946d04b945773       kube-apiserver-functional-326239            kube-system
	252ea51f29295       a3e246e9556e9                                                                                         52 seconds ago       Running             etcd                      2                   49112b9c9d6bb       etcd-functional-326239                      kube-system
	0d96bdf31a92d       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       2                   bed0374ffef7a       storage-provisioner                         kube-system
	a04216aaaa0d9       aa5e3ebc0dfed                                                                                         About a minute ago   Exited              coredns                   1                   bf0bf6c39969e       coredns-7d764666f9-dpsjp                    kube-system
	8f00d318162af       7bb6219ddab95                                                                                         About a minute ago   Exited              kube-scheduler            1                   c651a32ac47bc       kube-scheduler-functional-326239            kube-system
	d13bd55cfe897       a3e246e9556e9                                                                                         About a minute ago   Exited              etcd                      1                   800264e9eef0e       etcd-functional-326239                      kube-system
	e9edf1d23f066       aa9d02839d8de                                                                                         About a minute ago   Exited              kube-apiserver            1                   591c78d5281c6       kube-apiserver-functional-326239            kube-system
	94649648cd635       8a4ded35a3eb1                                                                                         About a minute ago   Exited              kube-proxy                1                   612f3626dc189       kube-proxy-4cczw                            kube-system
	32435a975a61d       45f3cc72d235f                                                                                         About a minute ago   Exited              kube-controller-manager   1                   82259343f519a       kube-controller-manager-functional-326239   kube-system
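In this table the ATTEMPT counter increments each time the kubelet restarts a container within the same pod, so the Exited rows are earlier incarnations from before the cluster's restarts (busybox-mount simply ran to completion, and kube-apiserver came back under a fresh pod, hence ATTEMPT 0). The table should be reproducible on the node itself, assuming crictl is wired to the cri-dockerd socket as in the kicbase image:

  minikube -p functional-326239 ssh "sudo crictl ps -a"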
	
	
	==> coredns [2848af042375] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:54375 - 5356 "HINFO IN 2277724273926742442.4370476464319504986. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.442396993s
	
	
	==> coredns [a04216aaaa0d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:48241 - 39332 "HINFO IN 252694381531183033.4922054418079394582. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.045627179s
	
	
	==> describe nodes <==
	Name:               functional-326239
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-326239
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=functional-326239
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_25_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:25:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-326239
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:27:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:27:31 +0000   Sat, 06 Dec 2025 09:25:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:27:31 +0000   Sat, 06 Dec 2025 09:25:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:27:31 +0000   Sat, 06 Dec 2025 09:25:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:27:31 +0000   Sat, 06 Dec 2025 09:25:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-326239
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                35f8e24a-ae6f-4c51-b491-d09628d40f26
	  Boot ID:                    41ef56f7-de94-4c23-8e93-ec48e4e68466
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.1.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-x4599                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     hello-node-connect-9f67c86d4-zw8gz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 coredns-7d764666f9-dpsjp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m36s
	  kube-system                 etcd-functional-326239                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m43s
	  kube-system                 kube-apiserver-functional-326239              250m (3%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-controller-manager-functional-326239     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-proxy-4cczw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-scheduler-functional-326239              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-sbmk5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zlv6s          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  2m37s  node-controller  Node functional-326239 event: Registered Node functional-326239 in Controller
	  Normal  RegisteredNode  95s    node-controller  Node functional-326239 event: Registered Node functional-326239 in Controller
	  Normal  RegisteredNode  48s    node-controller  Node functional-326239 event: Registered Node functional-326239 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[  +1.299675] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 16 c7 72 c4 93 08 06
	[  +0.000525] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[Dec 6 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 3f cc d3 d9 16 08 06
	[  +0.000633] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[  +0.000768] IPv4: martian source 10.244.0.32 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[Dec 6 09:12] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 81 03 c8 c4 0c 08 06
	[Dec 6 09:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 ce 4a 36 be 39 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 6e 96 20 7e 61 08 06
	[Dec 6 09:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000002] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8c d8 e5 b5 d0 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff fe d1 60 dc a1 8a 08 06
	[Dec 6 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 c0 1c 2b e1 b2 08 06
	[Dec 6 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 33 29 de 35 84 08 06
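The "martian source" lines are the kernel logging packets that arrived on eth0 with a source address it considers implausible; with the pod subnet (10.244.0.0/24) bridged through the node this is expected noise rather than a failure. Whether such packets are logged is controlled by sysctls, which can be inspected with:

  sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians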
	
	
	==> etcd [252ea51f2929] <==
	{"level":"warn","ts":"2025-12-06T09:27:00.446427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.453173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.459906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.466081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.472849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.487722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.494784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.501928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.508583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.515262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.522039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.528572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.535363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.547138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.553749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.562415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.569045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.575692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.590169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.596750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.604034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.611260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.618192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.658167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.708674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35742","server-name":"","error":"EOF"}
	
	
	==> etcd [d13bd55cfe89] <==
	{"level":"warn","ts":"2025-12-06T09:26:13.711985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.719248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.728349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.734991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.741886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.748836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.755421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.761648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.768049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.775806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.787179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.794102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.801937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.809234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.817575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.824368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.832430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.840229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.846990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.854824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.862270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.879470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.892726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.899797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.947416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53200","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:27:52 up  2:10,  0 user,  load average: 1.10, 0.64, 0.97
	Linux functional-326239 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [9650b470b535] <==
	I1206 09:27:01.162134       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:27:01.162140       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:27:01.165264       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:27:01.167086       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:27:01.167632       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	E1206 09:27:01.168251       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:27:01.171741       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:01.171760       1 policy_source.go:248] refreshing policies
	I1206 09:27:01.185414       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:27:01.792711       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:27:02.069434       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:27:02.847290       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:27:02.882960       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:27:02.913018       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:27:02.921097       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:27:04.549778       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:27:04.648419       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:27:20.089722       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.126.45"}
	I1206 09:27:25.680231       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.95.210"}
	I1206 09:27:26.265885       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:27:26.342843       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.7.69"}
	I1206 09:27:29.303777       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.78.28"}
	I1206 09:27:50.824501       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:27:50.924501       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.177.225"}
	I1206 09:27:50.933691       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.141.242"}
	
	
	==> kube-apiserver [e9edf1d23f06] <==
	I1206 09:26:14.406937       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 09:26:14.406957       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:14.406957       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 09:26:14.407047       1 aggregator.go:187] initial CRD sync complete...
	I1206 09:26:14.407060       1 autoregister_controller.go:144] Starting autoregister controller
	I1206 09:26:14.407065       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 09:26:14.407070       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:26:14.407159       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 09:26:14.407406       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:14.407513       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 09:26:14.407515       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:14.408282       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 09:26:14.408304       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 09:26:14.409810       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:14.409833       1 policy_source.go:248] refreshing policies
	I1206 09:26:14.410967       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1206 09:26:14.413797       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1206 09:26:14.415472       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:26:14.472709       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:26:15.311382       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:26:16.404288       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:26:17.790091       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:26:17.840120       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:26:17.889861       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:26:18.041717       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [32435a975a61] <==
	I1206 09:26:17.543338       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542791       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.543445       1 range_allocator.go:177] "Sending events to api server"
	I1206 09:26:17.543505       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1206 09:26:17.543511       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:17.543517       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.543584       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542776       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.543602       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544476       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544596       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544653       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544828       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542651       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544968       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.545005       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.545183       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.545874       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542799       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.551452       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.555386       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:17.643251       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.643285       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:26:17.643293       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:26:17.655982       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [89bcf71329c9] <==
	I1206 09:27:04.304600       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.304860       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.304948       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.304976       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305159       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305209       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1206 09:27:04.305279       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305286       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-326239"
	I1206 09:27:04.305372       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1206 09:27:04.305634       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305646       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305679       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305765       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.306019       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.306689       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.306950       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:27:04.403997       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.404025       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:27:04.404030       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:27:04.407939       1 shared_informer.go:377] "Caches are synced"
	E1206 09:27:50.867219       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.873234       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.879428       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.880773       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.883077       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
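The "serviceaccount \"kubernetes-dashboard\" not found" errors are a create-ordering race: the dashboard ReplicaSets were synced before the addon's ServiceAccount object existed, and the controller retries until it does (the dashboard pods are 2s old in the node description above, so the retries succeeded). One way to confirm after the fact:

  kubectl --context functional-326239 -n kubernetes-dashboard get serviceaccount,pods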
	
	
	==> kube-proxy [94649648cd63] <==
	I1206 09:26:12.602160       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:26:12.674751       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:14.377086       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:14.377355       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:26:14.377596       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:26:14.416940       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:26:14.417016       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:26:14.423977       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:26:14.425728       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:26:14.425755       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:26:14.427928       1 config.go:200] "Starting service config controller"
	I1206 09:26:14.427960       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:26:14.428012       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:26:14.428023       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:26:14.428030       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:26:14.428043       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:26:14.428068       1 config.go:309] "Starting node config controller"
	I1206 09:26:14.428083       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:26:14.428090       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:26:14.529013       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:26:14.529035       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:26:14.529051       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [de7b60d3f85f] <==
	I1206 09:27:02.323363       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:27:02.393893       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:27:02.494373       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:02.494413       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:27:02.494497       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:27:02.516848       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:27:02.516936       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:27:02.522473       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:27:02.522825       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:27:02.522848       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:27:02.524250       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:27:02.524275       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:27:02.524276       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:27:02.524301       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:27:02.524280       1 config.go:200] "Starting service config controller"
	I1206 09:27:02.524351       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:27:02.524372       1 config.go:309] "Starting node config controller"
	I1206 09:27:02.524378       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:27:02.624406       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:27:02.624426       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:27:02.624452       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:27:02.624569       1 shared_informer.go:356] "Caches are synced" controller="service config"
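
Note on the kube-proxy warning above: "nodePortAddresses is unset" is advisory — NodePort traffic is accepted on every local IP until the range is narrowed. A minimal sketch of the fix the message itself suggests, assuming a kubeadm-style kube-proxy ConfigMap in kube-system (the ["primary"] list form is our reading of the flag name, not taken from this log):

    kubectl --context functional-326239 -n kube-system edit configmap kube-proxy
    # in the embedded config.conf, set (per the warning above):
    #   nodePortAddresses: ["primary"]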
	
	
	==> kube-scheduler [2876efeddbfb] <==
	I1206 09:26:59.727097       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:27:01.087550       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:27:01.087780       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:27:01.087813       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:27:01.087823       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:27:01.107671       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:27:01.107709       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:27:01.110792       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:27:01.110819       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:27:01.110831       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:27:01.111018       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:27:01.211217       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [8f00d318162a] <==
	I1206 09:26:13.180671       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:26:14.316173       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:26:14.316208       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:26:14.316219       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:26:14.316229       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:26:14.355000       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:26:14.355040       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:26:14.360961       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:26:14.361136       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:26:14.361154       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:26:14.361969       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:14.462893       1 shared_informer.go:377] "Caches are synced"
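
The requestheader_controller warnings above repeat at each scheduler start and carry their own suggested remedy. Adapted to the scheduler's user identity rather than a service account, a sketch of that rolebinding (the binding name is a placeholder; the scheduler runs fine without it, falling back to anonymous-tolerant authentication as logged):

    kubectl --context functional-326239 -n kube-system create rolebinding \
      scheduler-authentication-reader \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler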
	
	
	==> kubelet <==
	Dec 06 09:27:41 functional-326239 kubelet[8745]: E1206 09:27:41.821289    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-x4599" podUID="47d3fefa-9586-4712-914d-c9afc666299e"
	Dec 06 09:27:42 functional-326239 kubelet[8745]: I1206 09:27:42.325335    8745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0778e6b23fcbc8f9473eae77abe66a0d516879be638e520de64a39be31bd30e"
	Dec 06 09:27:43 functional-326239 kubelet[8745]: E1206 09:27:43.792341    8745 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 06 09:27:43 functional-326239 kubelet[8745]: E1206 09:27:43.792405    8745 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Dec 06 09:27:43 functional-326239 kubelet[8745]: E1206 09:27:43.792734    8745 kuberuntime_manager.go:1664] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-9f67c86d4-zw8gz_default(fd8e4b13-e45a-40be-9fa0-1e7579b8d00f): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:27:43 functional-326239 kubelet[8745]: E1206 09:27:43.792786    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-zw8gz" podUID="fd8e4b13-e45a-40be-9fa0-1e7579b8d00f"
	Dec 06 09:27:43 functional-326239 kubelet[8745]: E1206 09:27:43.888287    8745 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 06 09:27:43 functional-326239 kubelet[8745]: E1206 09:27:43.888344    8745 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 06 09:27:43 functional-326239 kubelet[8745]: E1206 09:27:43.888548    8745 kuberuntime_manager.go:1664] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(8de345de-964f-44a8-9994-19eb0772df93): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:27:43 functional-326239 kubelet[8745]: E1206 09:27:43.888595    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8de345de-964f-44a8-9994-19eb0772df93"
	Dec 06 09:27:50 functional-326239 kubelet[8745]: I1206 09:27:50.998541    8745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/27202e3f-a7f1-4a1d-8885-3705d48bb1b7-tmp-volume\") pod \"dashboard-metrics-scraper-5565989548-sbmk5\" (UID: \"27202e3f-a7f1-4a1d-8885-3705d48bb1b7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5"
	Dec 06 09:27:50 functional-326239 kubelet[8745]: I1206 09:27:50.998598    8745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6v28\" (UniqueName: \"kubernetes.io/projected/ac224612-ccb6-4df5-8fdb-c2360339af04-kube-api-access-b6v28\") pod \"kubernetes-dashboard-b84665fb8-zlv6s\" (UID: \"ac224612-ccb6-4df5-8fdb-c2360339af04\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s"
	Dec 06 09:27:50 functional-326239 kubelet[8745]: I1206 09:27:50.998628    8745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ac224612-ccb6-4df5-8fdb-c2360339af04-tmp-volume\") pod \"kubernetes-dashboard-b84665fb8-zlv6s\" (UID: \"ac224612-ccb6-4df5-8fdb-c2360339af04\") " pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s"
	Dec 06 09:27:50 functional-326239 kubelet[8745]: I1206 09:27:50.998715    8745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lcww\" (UniqueName: \"kubernetes.io/projected/27202e3f-a7f1-4a1d-8885-3705d48bb1b7-kube-api-access-9lcww\") pod \"dashboard-metrics-scraper-5565989548-sbmk5\" (UID: \"27202e3f-a7f1-4a1d-8885-3705d48bb1b7\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5"
	Dec 06 09:27:51 functional-326239 kubelet[8745]: E1206 09:27:51.480628    8745 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:27:51 functional-326239 kubelet[8745]: E1206 09:27:51.480688    8745 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:27:51 functional-326239 kubelet[8745]: E1206 09:27:51.481113    8745 kuberuntime_manager.go:1664] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-b84665fb8-zlv6s_kubernetes-dashboard(ac224612-ccb6-4df5-8fdb-c2360339af04): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:27:51 functional-326239 kubelet[8745]: E1206 09:27:51.481174    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s" podUID="ac224612-ccb6-4df5-8fdb-c2360339af04"
	Dec 06 09:27:51 functional-326239 kubelet[8745]: E1206 09:27:51.529050    8745 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:27:51 functional-326239 kubelet[8745]: E1206 09:27:51.529125    8745 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:27:51 functional-326239 kubelet[8745]: E1206 09:27:51.529382    8745 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-5565989548-sbmk5_kubernetes-dashboard(27202e3f-a7f1-4a1d-8885-3705d48bb1b7): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:27:51 functional-326239 kubelet[8745]: E1206 09:27:51.529442    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" podUID="27202e3f-a7f1-4a1d-8885-3705d48bb1b7"
	Dec 06 09:27:52 functional-326239 kubelet[8745]: E1206 09:27:52.438932    8745 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s" containerName="kubernetes-dashboard"
	Dec 06 09:27:52 functional-326239 kubelet[8745]: E1206 09:27:52.439005    8745 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" containerName="dashboard-metrics-scraper"
	Dec 06 09:27:52 functional-326239 kubelet[8745]: E1206 09:27:52.441241    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s" podUID="ac224612-ccb6-4df5-8fdb-c2360339af04"
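
Every pull failure in the kubelet log above has the same root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests). A sketch of one common mitigation — authenticate the pulls through an imagePullSecret attached to the default service account (secret name and credentials below are placeholders, not part of this run):

    docker login -u <dockerhub-user>
    kubectl --context functional-326239 create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<dockerhub-user> --docker-password=<access-token>
    kubectl --context functional-326239 patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "dockerhub-creds"}]}'

Pre-loading the images with "minikube image load" would also sidestep the registry entirely.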
	
	
	==> storage-provisioner [0d96bdf31a92] <==
	I1206 09:26:24.816621       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:26:24.816671       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:26:24.818753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:28.274068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:32.535027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:36.133306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:39.189800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:42.212252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:42.218505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:26:42.218741       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:26:42.218864       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a7a2486b-50c5-43aa-87c6-fe9171bc66e3", APIVersion:"v1", ResourceVersion:"553", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-326239_06d988fb-59e4-447c-94c6-ab6bc5c0a7f4 became leader
	I1206 09:26:42.218959       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-326239_06d988fb-59e4-447c-94c6-ab6bc5c0a7f4!
	W1206 09:26:42.220599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:42.223472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:26:42.320091       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-326239_06d988fb-59e4-447c-94c6-ab6bc5c0a7f4!
	W1206 09:26:44.227019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:44.230865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:46.234176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:46.238415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:48.242019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:48.246112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:50.249752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:50.254841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:52.258225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:52.262127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
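
The repeating "v1 Endpoints is deprecated" warnings come from the provisioner's leader election, which renews its lease through the legacy kube-system/k8s.io-minikube-hostpath Endpoints object (seen being acquired above), so a warning pair appears on each renewal tick. To inspect the lease object itself:

    kubectl --context functional-326239 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml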
	
	
	==> storage-provisioner [87dba194ce02] <==
	I1206 09:27:31.262989       1 volume_store.go:212] Trying to save persistentvolume "pvc-b76995da-1783-4b19-884a-7c6372e29852"
	I1206 09:27:31.272371       1 volume_store.go:219] persistentvolume "pvc-b76995da-1783-4b19-884a-7c6372e29852" saved
	I1206 09:27:31.272522       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"b76995da-1783-4b19-884a-7c6372e29852", APIVersion:"v1", ResourceVersion:"765", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b76995da-1783-4b19-884a-7c6372e29852
	W1206 09:27:31.722713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:31.726262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:33.729533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:33.733296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:35.736317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:35.740863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:37.743974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:37.749257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:39.752004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:39.757041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:41.760103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:41.764859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:43.768260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:43.773466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:45.776461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:45.780247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:47.783881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:47.788117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:49.791585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:49.797004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:51.800806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:27:51.806229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-326239 -n functional-326239
helpers_test.go:269: (dbg) Run:  kubectl --context functional-326239 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-326239 describe pod busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-326239 describe pod busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s: exit status 1 (85.759858ms)
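
The non-zero exit here is expected: the pod list was taken with a field selector that also matches completed pods, and the two dashboard pods were deleted between the list and the describe (see the NotFound errors in stderr further below). To reproduce the selection step by hand:

    kubectl --context functional-326239 get po -A --field-selector=status.phase!=Running
    # Succeeded pods such as busybox-mount match status.phase!=Running too.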

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:37 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://a0fcd81222f118539b5967330da5243f390d47260cea6ccca50207c84ffeab6c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 06 Dec 2025 09:27:39 +0000
	      Finished:     Sat, 06 Dec 2025 09:27:39 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gpqn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-5gpqn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  15s   default-scheduler  Successfully assigned default/busybox-mount to functional-326239
	  Normal  Pulling    15s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     14s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.363s (1.363s including waiting). Image size: 4403845 bytes.
	  Normal  Created    14s   kubelet            Container created
	  Normal  Started    14s   kubelet            Container started
	
	
	Name:             hello-node-5758569b79-x4599
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:29 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ldsr2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ldsr2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  23s                default-scheduler  Successfully assigned default/hello-node-5758569b79-x4599 to functional-326239
	  Warning  Failed     24s                kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    12s (x2 over 24s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     12s (x2 over 24s)  kubelet            Error: ErrImagePull
	  Warning  Failed     12s                kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    1s (x2 over 23s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     1s (x2 over 23s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-9f67c86d4-zw8gz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:26 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2872p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2872p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  26s                default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-zw8gz to functional-326239
	  Normal   BackOff    25s                kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     25s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    10s (x2 over 27s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     10s (x2 over 26s)  kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     10s (x2 over 26s)  kubelet            Error: ErrImagePull
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w6q4p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-w6q4p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  21s                default-scheduler  Successfully assigned default/sp-pod to functional-326239
	  Normal   BackOff    21s                kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     21s                kubelet            Error: ImagePullBackOff
	  Normal   Pulling    10s (x2 over 22s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     10s (x2 over 21s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     10s (x2 over 21s)  kubelet            Error: ErrImagePull

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-sbmk5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-zlv6s" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-326239 describe pod busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (3.03s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.61s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-326239 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-326239 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-zw8gz" [fd8e4b13-e45a-40be-9fa0-1e7579b8d00f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-326239 -n functional-326239
functional_test.go:1645: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-12-06 09:37:26.67711809 +0000 UTC m=+2232.383757742
functional_test.go:1645: (dbg) Run:  kubectl --context functional-326239 describe po hello-node-connect-9f67c86d4-zw8gz -n default
functional_test.go:1645: (dbg) kubectl --context functional-326239 describe po hello-node-connect-9f67c86d4-zw8gz -n default:
Name:             hello-node-connect-9f67c86d4-zw8gz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-326239/192.168.49.2
Start Time:       Sat, 06 Dec 2025 09:27:26 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2872p (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-2872p:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-zw8gz to functional-326239
Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m2s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m2s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Warning  Failed     4m51s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m36s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-326239 logs hello-node-connect-9f67c86d4-zw8gz -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-326239 logs hello-node-connect-9f67c86d4-zw8gz -n default: exit status 1 (65.190708ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-zw8gz" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-326239 logs hello-node-connect-9f67c86d4-zw8gz -n default: exit status 1
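
The kubectl logs failure above is inherent: the echo-server container never started, so there is no log stream to read. For a pod stuck in ImagePullBackOff, the event stream is the useful signal, e.g.:

    kubectl --context functional-326239 get events -n default \
      --field-selector involvedObject.name=hello-node-connect-9f67c86d4-zw8gz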
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-326239 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-9f67c86d4-zw8gz
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-326239/192.168.49.2
Start Time:       Sat, 06 Dec 2025 09:27:26 +0000
Labels:           app=hello-node-connect
pod-template-hash=9f67c86d4
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2872p (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-2872p:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-zw8gz to functional-326239
Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m2s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m2s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Warning  Failed     4m51s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m36s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-326239 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-326239 logs -l app=hello-node-connect: exit status 1 (61.941832ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-9f67c86d4-zw8gz" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-326239 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-326239 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.7.69
IPs:                      10.111.7.69
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32024/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
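
Note the empty "Endpoints:" line in the service describe above: with no ready pod behind the app=hello-node-connect selector, connections to NodePort 32024 would be refused even once traffic reaches the node. A quick check of the backing endpoints (EndpointSlices carry the standard kubernetes.io/service-name label):

    kubectl --context functional-326239 get endpointslices -n default \
      -l kubernetes.io/service-name=hello-node-connect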
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-326239
helpers_test.go:243: (dbg) docker inspect functional-326239:

-- stdout --
	[
	    {
	        "Id": "2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b",
	        "Created": "2025-12-06T09:24:58.563992644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 627619,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:24:58.603754235Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/hosts",
	        "LogPath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b-json.log",
	        "Name": "/functional-326239",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-326239:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-326239",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b",
	                "LowerDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027-init/diff:/var/lib/docker/overlay2/e436edcb7322c840f879b3c5d1d6403a3125a1711763277d84155a12f01e0462/diff",
	                "MergedDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027/merged",
	                "UpperDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027/diff",
	                "WorkDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-326239",
	                "Source": "/var/lib/docker/volumes/functional-326239/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-326239",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-326239",
	                "name.minikube.sigs.k8s.io": "functional-326239",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d8644ebd1266b07608ac54125ce4be0a55df19ae0337a89715d5a7b71c158c36",
	            "SandboxKey": "/var/run/docker/netns/d8644ebd1266",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-326239": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa31b1b5343ce0077ab7432095e00979a827c42de8b3b6cbea2885bebf249faf",
	                    "EndpointID": "c7f2a99f28f1ab2776d35311f6b88b250c8e144bc0014a825f0b3bc1d8107e4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "9a:e0:63:fc:a4:9f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-326239",
	                        "2b1e49e27471"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
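
In the inspect output above, apiserver port 8441/tcp is published only on loopback (127.0.0.1:33189), which is how minikube's docker driver exposes the control plane to the host. To read that mapping without parsing the full JSON:

    docker port functional-326239 8441
    # per the inspect above, this prints: 127.0.0.1:33189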
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-326239 -n functional-326239
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-326239 ssh findmnt -T /mount3                                                                                                        │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ mount          │ -p functional-326239 --kill=true                                                                                                                │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ ssh            │ functional-326239 ssh sudo cat /etc/ssl/certs/558759.pem                                                                                        │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh            │ functional-326239 ssh sudo cat /usr/share/ca-certificates/558759.pem                                                                            │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh            │ functional-326239 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                        │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh            │ functional-326239 ssh sudo cat /etc/ssl/certs/5587592.pem                                                                                       │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh            │ functional-326239 ssh sudo cat /usr/share/ca-certificates/5587592.pem                                                                           │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh            │ functional-326239 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                        │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ start          │ -p functional-326239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0 │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ docker-env     │ functional-326239 docker-env                                                                                                                    │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ docker-env     │ functional-326239 docker-env                                                                                                                    │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ start          │ -p functional-326239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0 │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ start          │ -p functional-326239 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0           │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-326239 --alsologtostderr -v=1                                                                                  │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ ssh            │ functional-326239 ssh sudo cat /etc/test/nested/copy/558759/hosts                                                                               │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ image          │ functional-326239 image ls --format short --alsologtostderr                                                                                     │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-326239 image ls --format yaml --alsologtostderr                                                                                      │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh            │ functional-326239 ssh pgrep buildkitd                                                                                                           │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ image          │ functional-326239 image build -t localhost/my-image:functional-326239 testdata/build --alsologtostderr                                          │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-326239 image ls                                                                                                                      │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-326239 image ls --format json --alsologtostderr                                                                                      │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-326239 image ls --format table --alsologtostderr                                                                                     │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ update-context │ functional-326239 update-context --alsologtostderr -v=2                                                                                         │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ update-context │ functional-326239 update-context --alsologtostderr -v=2                                                                                         │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ update-context │ functional-326239 update-context --alsologtostderr -v=2                                                                                         │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
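Rows with an empty END TIME in the audit table generally correspond to commands that exited non-zero or were still resident when the log was captured: the two `start --dry-run --memory 250MB` rows, for example, are meant to fail fast, since 250MB is well below minikube's minimum memory requirement and `--dry-run` only validates configuration without creating anything. A local reproduction with the same flags as the audit rows (expected to exit non-zero) would be:

  out/minikube-linux-amd64 start -p functional-326239 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=docker --kubernetes-version=v1.35.0-beta.0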
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:27:49
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:27:49.849880  648422 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:27:49.849988  648422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:49.849996  648422 out.go:374] Setting ErrFile to fd 2...
	I1206 09:27:49.850000  648422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:49.850222  648422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:27:49.850643  648422 out.go:368] Setting JSON to false
	I1206 09:27:49.851669  648422 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7817,"bootTime":1765005453,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:27:49.851731  648422 start.go:143] virtualization: kvm guest
	I1206 09:27:49.853403  648422 out.go:179] * [functional-326239] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:27:49.854528  648422 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:27:49.854519  648422 notify.go:221] Checking for updates...
	I1206 09:27:49.856047  648422 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:27:49.857413  648422 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:27:49.858509  648422 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:27:49.859659  648422 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:27:49.860770  648422 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:27:49.862286  648422 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:49.862844  648422 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:27:49.885450  648422 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:27:49.885628  648422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:27:49.942760  648422 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:27:49.933265972 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:27:49.942863  648422 docker.go:319] overlay module found
	I1206 09:27:49.945331  648422 out.go:179] * Using the docker driver based on existing profile
	I1206 09:27:49.946651  648422 start.go:309] selected driver: docker
	I1206 09:27:49.946664  648422 start.go:927] validating driver "docker" against &{Name:functional-326239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-326239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:49.946748  648422 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:27:49.946833  648422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:27:50.002539  648422 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:27:49.993429398 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:27:50.003268  648422 cni.go:84] Creating CNI manager for ""
	I1206 09:27:50.003381  648422 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1206 09:27:50.003433  648422 start.go:353] cluster config:
	{Name:functional-326239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-326239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:50.005091  648422 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 06 09:29:14 functional-326239 dockerd[7374]: time="2025-12-06T09:29:14.793648667Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:29:18 functional-326239 dockerd[7374]: time="2025-12-06T09:29:18.712628260Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:29:18 functional-326239 dockerd[7374]: time="2025-12-06T09:29:18.743697326Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:29:25 functional-326239 dockerd[7374]: time="2025-12-06T09:29:25.713512639Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:29:25 functional-326239 dockerd[7374]: time="2025-12-06T09:29:25.742826462Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:13 functional-326239 dockerd[7374]: time="2025-12-06T09:30:13.881378097Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:13 functional-326239 cri-dockerd[7694]: time="2025-12-06T09:30:13Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
	Dec 06 09:30:24 functional-326239 dockerd[7374]: time="2025-12-06T09:30:24.809413731Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:27 functional-326239 dockerd[7374]: time="2025-12-06T09:30:27.791255008Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:46 functional-326239 dockerd[7374]: time="2025-12-06T09:30:46.796185949Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:46 functional-326239 dockerd[7374]: time="2025-12-06T09:30:46.814405222Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:30:46 functional-326239 dockerd[7374]: time="2025-12-06T09:30:46.843920119Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:48 functional-326239 dockerd[7374]: time="2025-12-06T09:30:48.713308009Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:30:48 functional-326239 dockerd[7374]: time="2025-12-06T09:30:48.741239181Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:00 functional-326239 dockerd[7374]: time="2025-12-06T09:33:00.864570088Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:00 functional-326239 cri-dockerd[7694]: time="2025-12-06T09:33:00Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
	Dec 06 09:33:05 functional-326239 dockerd[7374]: time="2025-12-06T09:33:05.787693599Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:13 functional-326239 dockerd[7374]: time="2025-12-06T09:33:13.797758574Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:27 functional-326239 dockerd[7374]: time="2025-12-06T09:33:27.710130847Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:33:27 functional-326239 dockerd[7374]: time="2025-12-06T09:33:27.739423694Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:29 functional-326239 dockerd[7374]: time="2025-12-06T09:33:29.785671239Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:35 functional-326239 dockerd[7374]: 2025/12/06 09:33:35 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Dec 06 09:33:36 functional-326239 dockerd[7374]: time="2025-12-06T09:33:36.718549754Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:33:36 functional-326239 dockerd[7374]: time="2025-12-06T09:33:36.750032252Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:37 functional-326239 dockerd[7374]: time="2025-12-06T09:33:37.176798777Z" level=info msg="sbJoin: gwep4 ''->'9364f6c5debd', gwep6 ''->''"
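Every pull failure in the Docker section above has the same root cause: Docker Hub's anonymous pull rate limit (toomanyrequests), which blocks kubernetesui/dashboard, kubernetesui/metrics-scraper, and kicbase/echo-server and leaves the corresponding pods listed further below stuck waiting for images. Two stock mitigations, sketched on the assumption that the images exist in the host docker daemon or that registry credentials are available:

  # Side-load an image from the host daemon into the cluster, bypassing the registry entirely:
  out/minikube-linux-amd64 -p functional-326239 image load kicbase/echo-server:latest

  # Or authenticate, so pulls count against an account quota rather than the anonymous per-IP limit:
  docker login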
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a0fcd81222f11       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Exited              mount-munger              0                   b0778e6b23fcb       busybox-mount                               default
	3dfef744435d8       nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14                         10 minutes ago      Running             nginx                     0                   e349fc4073f7d       nginx-svc                                   default
	2848af042375d       aa5e3ebc0dfed                                                                                         10 minutes ago      Running             coredns                   2                   7a2e0dfb83c2d       coredns-7d764666f9-dpsjp                    kube-system
	de7b60d3f85f2       8a4ded35a3eb1                                                                                         10 minutes ago      Running             kube-proxy                2                   3fe5270d01a93       kube-proxy-4cczw                            kube-system
	87dba194ce022       6e38f40d628db                                                                                         10 minutes ago      Running             storage-provisioner       3                   b129f4caf585d       storage-provisioner                         kube-system
	89bcf71329c96       45f3cc72d235f                                                                                         10 minutes ago      Running             kube-controller-manager   2                   ee92ed20ec021       kube-controller-manager-functional-326239   kube-system
	2876efeddbfb0       7bb6219ddab95                                                                                         10 minutes ago      Running             kube-scheduler            2                   745e53746de0f       kube-scheduler-functional-326239            kube-system
	9650b470b5357       aa9d02839d8de                                                                                         10 minutes ago      Running             kube-apiserver            0                   946d04b945773       kube-apiserver-functional-326239            kube-system
	252ea51f29295       a3e246e9556e9                                                                                         10 minutes ago      Running             etcd                      2                   49112b9c9d6bb       etcd-functional-326239                      kube-system
	0d96bdf31a92d       6e38f40d628db                                                                                         11 minutes ago      Exited              storage-provisioner       2                   bed0374ffef7a       storage-provisioner                         kube-system
	a04216aaaa0d9       aa5e3ebc0dfed                                                                                         11 minutes ago      Exited              coredns                   1                   bf0bf6c39969e       coredns-7d764666f9-dpsjp                    kube-system
	8f00d318162af       7bb6219ddab95                                                                                         11 minutes ago      Exited              kube-scheduler            1                   c651a32ac47bc       kube-scheduler-functional-326239            kube-system
	d13bd55cfe897       a3e246e9556e9                                                                                         11 minutes ago      Exited              etcd                      1                   800264e9eef0e       etcd-functional-326239                      kube-system
	94649648cd635       8a4ded35a3eb1                                                                                         11 minutes ago      Exited              kube-proxy                1                   612f3626dc189       kube-proxy-4cczw                            kube-system
	32435a975a61d       45f3cc72d235f                                                                                         11 minutes ago      Exited              kube-controller-manager   1                   82259343f519a       kube-controller-manager-functional-326239   kube-system
	
	
	==> coredns [2848af042375] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:54375 - 5356 "HINFO IN 2277724273926742442.4370476464319504986. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.442396993s
	
	
	==> coredns [a04216aaaa0d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:48241 - 39332 "HINFO IN 252694381531183033.4922054418079394582. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.045627179s
	
	
	==> describe nodes <==
	Name:               functional-326239
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-326239
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=functional-326239
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_25_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:25:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-326239
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:37:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:34:00 +0000   Sat, 06 Dec 2025 09:25:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:34:00 +0000   Sat, 06 Dec 2025 09:25:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:34:00 +0000   Sat, 06 Dec 2025 09:25:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:34:00 +0000   Sat, 06 Dec 2025 09:25:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-326239
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                35f8e24a-ae6f-4c51-b491-d09628d40f26
	  Boot ID:                    41ef56f7-de94-4c23-8e93-ec48e4e68466
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.1.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-x4599                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	  default                     hello-node-connect-9f67c86d4-zw8gz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-844cf969f6-87mns                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m34s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 coredns-7d764666f9-dpsjp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-326239                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-326239              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-326239     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4cczw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-326239              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-sbmk5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zlv6s          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  12m   node-controller  Node functional-326239 event: Registered Node functional-326239 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node functional-326239 event: Registered Node functional-326239 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-326239 event: Registered Node functional-326239 in Controller
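As a quick consistency check on the Allocated resources table above: the node's allocatable CPU is 8 (8000m), so the 1350m requested is 1350/8000 ≈ 16.9%, which kubectl truncates to the 16% shown; allocatable memory is 32863360Ki ≈ 32093Mi, so 682Mi ≈ 2.1% and 870Mi ≈ 2.7%, both truncated to 2%.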
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[  +1.299675] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 16 c7 72 c4 93 08 06
	[  +0.000525] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[Dec 6 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 3f cc d3 d9 16 08 06
	[  +0.000633] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[  +0.000768] IPv4: martian source 10.244.0.32 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[Dec 6 09:12] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 81 03 c8 c4 0c 08 06
	[Dec 6 09:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 ce 4a 36 be 39 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 6e 96 20 7e 61 08 06
	[Dec 6 09:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000002] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8c d8 e5 b5 d0 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff fe d1 60 dc a1 8a 08 06
	[Dec 6 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 c0 1c 2b e1 b2 08 06
	[Dec 6 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 33 29 de 35 84 08 06
	
	
	==> etcd [252ea51f2929] <==
	{"level":"warn","ts":"2025-12-06T09:27:00.466081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.472849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.487722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.494784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.501928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.508583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.515262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.522039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.528572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.535363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.547138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.553749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.562415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.569045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.575692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.590169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.596750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.604034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.611260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.618192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.658167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.708674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35742","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:37:00.196870Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1414}
	{"level":"info","ts":"2025-12-06T09:37:00.217718Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1414,"took":"20.421611ms","hash":615152118,"current-db-size-bytes":4030464,"current-db-size":"4.0 MB","current-db-size-in-use-bytes":2154496,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-12-06T09:37:00.217773Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":615152118,"revision":1414,"compact-revision":-1}
	
	
	==> etcd [d13bd55cfe89] <==
	{"level":"warn","ts":"2025-12-06T09:26:13.711985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.719248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.728349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.734991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.741886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.748836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.755421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.761648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.768049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.775806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.787179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.794102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.801937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.809234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.817575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.824368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.832430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.840229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.846990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.854824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.862270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.879470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.892726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.899797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.947416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53200","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:37:28 up  2:19,  0 user,  load average: 0.18, 0.21, 0.56
	Linux functional-326239 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [9650b470b535] <==
	I1206 09:27:01.165264       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:27:01.167086       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:27:01.167632       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	E1206 09:27:01.168251       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:27:01.171741       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:01.171760       1 policy_source.go:248] refreshing policies
	I1206 09:27:01.185414       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:27:01.792711       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:27:02.069434       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:27:02.847290       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:27:02.882960       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:27:02.913018       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:27:02.921097       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:27:04.549778       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:27:04.648419       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:27:20.089722       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.126.45"}
	I1206 09:27:25.680231       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.95.210"}
	I1206 09:27:26.265885       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:27:26.342843       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.7.69"}
	I1206 09:27:29.303777       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.78.28"}
	I1206 09:27:50.824501       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:27:50.924501       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.177.225"}
	I1206 09:27:50.933691       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.141.242"}
	I1206 09:27:53.497457       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.111.90"}
	I1206 09:37:01.092237       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [32435a975a61] <==
	I1206 09:26:17.543338       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542791       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.543445       1 range_allocator.go:177] "Sending events to api server"
	I1206 09:26:17.543505       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1206 09:26:17.543511       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:17.543517       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.543584       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542776       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.543602       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544476       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544596       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544653       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544828       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542651       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544968       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.545005       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.545183       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.545874       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542799       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.551452       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.555386       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:17.643251       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.643285       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:26:17.643293       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:26:17.655982       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [89bcf71329c9] <==
	I1206 09:27:04.304600       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.304860       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.304948       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.304976       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305159       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305209       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1206 09:27:04.305279       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305286       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-326239"
	I1206 09:27:04.305372       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1206 09:27:04.305634       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305646       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305679       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305765       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.306019       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.306689       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.306950       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:27:04.403997       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.404025       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:27:04.404030       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:27:04.407939       1 shared_informer.go:377] "Caches are synced"
	E1206 09:27:50.867219       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.873234       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.879428       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.880773       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.883077       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
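The repeated "serviceaccount \"kubernetes-dashboard\" not found" errors above are consistent with a startup ordering race while the dashboard addon is applied: the ReplicaSets are reconciled before their ServiceAccount exists, and the replica-set controller retries until it appears. A minimal follow-up check, assuming the functional-326239 context from this run (namespace and object names are taken from the log lines above):

	# Verify the ServiceAccount the ReplicaSets reference now exists
	kubectl --context functional-326239 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard
	# Confirm the ReplicaSets were eventually able to create their pods
	kubectl --context functional-326239 -n kubernetes-dashboard get replicaset,pods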
	
	
	==> kube-proxy [94649648cd63] <==
	I1206 09:26:12.602160       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:26:12.674751       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:14.377086       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:14.377355       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:26:14.377596       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:26:14.416940       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:26:14.417016       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:26:14.423977       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:26:14.425728       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:26:14.425755       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:26:14.427928       1 config.go:200] "Starting service config controller"
	I1206 09:26:14.427960       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:26:14.428012       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:26:14.428023       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:26:14.428030       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:26:14.428043       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:26:14.428068       1 config.go:309] "Starting node config controller"
	I1206 09:26:14.428083       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:26:14.428090       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:26:14.529013       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:26:14.529035       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:26:14.529051       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [de7b60d3f85f] <==
	I1206 09:27:02.323363       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:27:02.393893       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:27:02.494373       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:02.494413       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:27:02.494497       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:27:02.516848       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:27:02.516936       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:27:02.522473       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:27:02.522825       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:27:02.522848       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:27:02.524250       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:27:02.524275       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:27:02.524276       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:27:02.524301       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:27:02.524280       1 config.go:200] "Starting service config controller"
	I1206 09:27:02.524351       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:27:02.524372       1 config.go:309] "Starting node config controller"
	I1206 09:27:02.524378       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:27:02.624406       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:27:02.624426       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:27:02.624452       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:27:02.624569       1 shared_informer.go:356] "Caches are synced" controller="service config"
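Both kube-proxy instances emit the same configuration warning: nodePortAddresses is unset, so NodePort services accept connections on every local IP. The remedy the log itself suggests is `--nodeport-addresses primary`. A minimal sketch of that flag in isolation, assuming direct control over the kube-proxy command line (minikube normally manages kube-proxy itself, so this is illustrative rather than a recommended change to the test cluster):

	# Restrict NodePort listeners to addresses of the node's primary IP family
	kube-proxy --nodeport-addresses=primary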
	
	
	==> kube-scheduler [2876efeddbfb] <==
	I1206 09:26:59.727097       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:27:01.087550       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:27:01.087780       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:27:01.087813       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:27:01.087823       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:27:01.107671       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:27:01.107709       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:27:01.110792       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:27:01.110819       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:27:01.110831       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:27:01.111018       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:27:01.211217       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [8f00d318162a] <==
	I1206 09:26:13.180671       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:26:14.316173       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:26:14.316208       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:26:14.316219       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:26:14.316229       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:26:14.355000       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:26:14.355040       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:26:14.360961       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:26:14.361136       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:26:14.361154       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:26:14.361969       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:14.462893       1 shared_informer.go:377] "Caches are synced"
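Both scheduler instances log the same benign startup sequence: the extension-apiserver-authentication ConfigMap cannot be read yet, so the scheduler proceeds without that authentication configuration. The warning embeds its own template fix; reproduced below with the log's placeholders (ROLEBINDING_NAME, YOUR_NS, YOUR_SA) left unfilled, since the correct values depend on the deployment:

	kubectl create rolebinding -n kube-system ROLEBINDING_NAME \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=YOUR_NS:YOUR_SA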
	
	
	==> kubelet <==
	Dec 06 09:36:49 functional-326239 kubelet[8745]: E1206 09:36:49.696400    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-87mns" podUID="228dd7da-28bd-4f2a-a53d-ca0ef6a0d7c8"
	Dec 06 09:36:50 functional-326239 kubelet[8745]: E1206 09:36:50.694557    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-zw8gz" podUID="fd8e4b13-e45a-40be-9fa0-1e7579b8d00f"
	Dec 06 09:36:53 functional-326239 kubelet[8745]: E1206 09:36:53.694216    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8de345de-964f-44a8-9994-19eb0772df93"
	Dec 06 09:36:55 functional-326239 kubelet[8745]: E1206 09:36:55.693661    8745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-functional-326239" containerName="etcd"
	Dec 06 09:36:59 functional-326239 kubelet[8745]: E1206 09:36:59.693773    8745 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" containerName="dashboard-metrics-scraper"
	Dec 06 09:36:59 functional-326239 kubelet[8745]: E1206 09:36:59.696206    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" podUID="27202e3f-a7f1-4a1d-8885-3705d48bb1b7"
	Dec 06 09:37:00 functional-326239 kubelet[8745]: E1206 09:37:00.694472    8745 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s" containerName="kubernetes-dashboard"
	Dec 06 09:37:00 functional-326239 kubelet[8745]: E1206 09:37:00.697061    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s" podUID="ac224612-ccb6-4df5-8fdb-c2360339af04"
	Dec 06 09:37:01 functional-326239 kubelet[8745]: E1206 09:37:01.694827    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-x4599" podUID="47d3fefa-9586-4712-914d-c9afc666299e"
	Dec 06 09:37:03 functional-326239 kubelet[8745]: E1206 09:37:03.696689    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-87mns" podUID="228dd7da-28bd-4f2a-a53d-ca0ef6a0d7c8"
	Dec 06 09:37:05 functional-326239 kubelet[8745]: E1206 09:37:05.694297    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-zw8gz" podUID="fd8e4b13-e45a-40be-9fa0-1e7579b8d00f"
	Dec 06 09:37:06 functional-326239 kubelet[8745]: E1206 09:37:06.694657    8745 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-dpsjp" containerName="coredns"
	Dec 06 09:37:07 functional-326239 kubelet[8745]: E1206 09:37:07.694687    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8de345de-964f-44a8-9994-19eb0772df93"
	Dec 06 09:37:12 functional-326239 kubelet[8745]: E1206 09:37:12.693980    8745 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" containerName="dashboard-metrics-scraper"
	Dec 06 09:37:12 functional-326239 kubelet[8745]: E1206 09:37:12.696654    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" podUID="27202e3f-a7f1-4a1d-8885-3705d48bb1b7"
	Dec 06 09:37:14 functional-326239 kubelet[8745]: E1206 09:37:14.694845    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-x4599" podUID="47d3fefa-9586-4712-914d-c9afc666299e"
	Dec 06 09:37:14 functional-326239 kubelet[8745]: E1206 09:37:14.696466    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-87mns" podUID="228dd7da-28bd-4f2a-a53d-ca0ef6a0d7c8"
	Dec 06 09:37:15 functional-326239 kubelet[8745]: E1206 09:37:15.693559    8745 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s" containerName="kubernetes-dashboard"
	Dec 06 09:37:15 functional-326239 kubelet[8745]: E1206 09:37:15.695771    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s" podUID="ac224612-ccb6-4df5-8fdb-c2360339af04"
	Dec 06 09:37:19 functional-326239 kubelet[8745]: E1206 09:37:19.694395    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-zw8gz" podUID="fd8e4b13-e45a-40be-9fa0-1e7579b8d00f"
	Dec 06 09:37:19 functional-326239 kubelet[8745]: E1206 09:37:19.694626    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8de345de-964f-44a8-9994-19eb0772df93"
	Dec 06 09:37:25 functional-326239 kubelet[8745]: E1206 09:37:25.694630    8745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-326239" containerName="kube-controller-manager"
	Dec 06 09:37:26 functional-326239 kubelet[8745]: E1206 09:37:26.694698    8745 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" containerName="dashboard-metrics-scraper"
	Dec 06 09:37:26 functional-326239 kubelet[8745]: E1206 09:37:26.697228    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" podUID="27202e3f-a7f1-4a1d-8885-3705d48bb1b7"
	Dec 06 09:37:26 functional-326239 kubelet[8745]: E1206 09:37:26.697564    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-87mns" podUID="228dd7da-28bd-4f2a-a53d-ca0ef6a0d7c8"
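Every ImagePullBackOff in the kubelet log above has the same root cause: Docker Hub's unauthenticated pull rate limit ("toomanyrequests"). A standard mitigation is to pull with credentials through an imagePullSecret; a minimal sketch, assuming hypothetical credentials in DOCKER_USER/DOCKER_PASS and the default namespace used by the affected pods (the secret name "regcred" is likewise illustrative):

	# Create a registry-credential secret (all values are placeholders)
	kubectl --context functional-326239 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"
	# Attach it to the default ServiceAccount so newly created pods pull authenticated
	kubectl --context functional-326239 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'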
	
	
	==> storage-provisioner [0d96bdf31a92] <==
	I1206 09:26:24.816621       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:26:24.816671       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:26:24.818753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:28.274068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:32.535027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:36.133306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:39.189800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:42.212252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:42.218505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:26:42.218741       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:26:42.218864       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a7a2486b-50c5-43aa-87c6-fe9171bc66e3", APIVersion:"v1", ResourceVersion:"553", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-326239_06d988fb-59e4-447c-94c6-ab6bc5c0a7f4 became leader
	I1206 09:26:42.218959       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-326239_06d988fb-59e4-447c-94c6-ab6bc5c0a7f4!
	W1206 09:26:42.220599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:42.223472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:26:42.320091       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-326239_06d988fb-59e4-447c-94c6-ab6bc5c0a7f4!
	W1206 09:26:44.227019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:44.230865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:46.234176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:46.238415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:48.242019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:48.246112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:50.249752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:50.254841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:52.258225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:52.262127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [87dba194ce02] <==
	W1206 09:37:03.894312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:05.897962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:05.901946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:07.905329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:07.910787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:09.913582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:09.918263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:11.921256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:11.925221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:13.928799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:13.932859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:15.936639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:15.941905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:17.944731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:17.948899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:19.951651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:19.955358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:21.958331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:21.963461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:23.966944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:23.970931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:25.973931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:25.977662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:27.980230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:27.983542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
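The steady stream of v1 Endpoints deprecation warnings is emitted by the provisioner's leader-election loop, which, as the earlier instance shows at 09:26:42, acquires and then renews the kube-system/k8s.io-minikube-hostpath Endpoints lock roughly every two seconds. It is noise rather than a failure; the lock object can be inspected directly if needed:

	kubectl --context functional-326239 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml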
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-326239 -n functional-326239
helpers_test.go:269: (dbg) Run:  kubectl --context functional-326239 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz mysql-844cf969f6-87mns sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-326239 describe pod busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz mysql-844cf969f6-87mns sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-326239 describe pod busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz mysql-844cf969f6-87mns sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s: exit status 1 (89.826961ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:37 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://a0fcd81222f118539b5967330da5243f390d47260cea6ccca50207c84ffeab6c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 06 Dec 2025 09:27:39 +0000
	      Finished:     Sat, 06 Dec 2025 09:27:39 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gpqn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-5gpqn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m51s  default-scheduler  Successfully assigned default/busybox-mount to functional-326239
	  Normal  Pulling    9m50s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m49s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.363s (1.363s including waiting). Image size: 4403845 bytes.
	  Normal  Created    9m49s  kubelet            Container created
	  Normal  Started    9m49s  kubelet            Container started
	
	
	Name:             hello-node-5758569b79-x4599
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:29 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ldsr2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ldsr2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m59s                   default-scheduler  Successfully assigned default/hello-node-5758569b79-x4599 to functional-326239
	  Warning  Failed     8m42s (x3 over 9m47s)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m15s (x5 over 9m59s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m15s (x2 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m15s (x5 over 9m59s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m55s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m55s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-9f67c86d4-zw8gz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:26 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2872p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2872p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-zw8gz to functional-326239
	  Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m53s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m38s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-844cf969f6-87mns
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:53 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.16
	IPs:
	  IP:           10.244.0.16
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xxj6s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xxj6s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m35s                   default-scheduler  Successfully assigned default/mysql-844cf969f6-87mns to functional-326239
	  Normal   Pulling    6m42s (x5 over 9m34s)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     6m42s (x5 over 9m34s)   kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m42s (x5 over 9m34s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m28s (x20 over 9m34s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m13s (x21 over 9m34s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w6q4p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-w6q4p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m57s                   default-scheduler  Successfully assigned default/sp-pod to functional-326239
	  Normal   Pulling    7m1s (x5 over 9m57s)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m1s (x5 over 9m56s)    kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m1s (x5 over 9m56s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m52s (x21 over 9m56s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4m52s (x21 over 9m56s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-sbmk5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-zlv6s" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-326239 describe pod busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz mysql-844cf969f6-87mns sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (602.61s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (368.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [6e67244f-9b6d-4f27-9858-a97a98e01fcc] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003966264s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-326239 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-326239 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-326239 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-326239 apply -f testdata/storage-provisioner/pod.yaml
I1206 09:27:31.438756  558759 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8de345de-964f-44a8-9994-19eb0772df93] Pending
helpers_test.go:352: "sp-pod" [8de345de-964f-44a8-9994-19eb0772df93] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-326239 -n functional-326239
functional_test_pvc_test.go:140: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-12-06 09:33:31.757863893 +0000 UTC m=+1997.464503553
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-326239 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-326239 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-326239/192.168.49.2
Start Time:       Sat, 06 Dec 2025 09:27:31 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:  10.244.0.11
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ErrImagePull
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w6q4p (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-w6q4p:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  6m                    default-scheduler  Successfully assigned default/sp-pod to functional-326239
Normal   Pulling    3m4s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
Warning  Failed     3m4s (x5 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     3m4s (x5 over 5m59s)  kubelet            Error: ErrImagePull
Normal   BackOff    55s (x21 over 5m59s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     55s (x21 over 5m59s)  kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-326239 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-326239 logs sp-pod -n default: exit status 1 (72.710382ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-326239 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
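The PVC machinery itself appears to have worked here: myclaim was queried successfully, sp-pod was scheduled, and the mypd volume mounted; the 6m0s timeout is again the Docker Hub rate limit blocking the docker.io/nginx pull. One way to take the registry out of the loop on a retry, assuming the image is already present in the host's local Docker cache, is to side-load it into the node:

	# Load the image from the host into the minikube node instead of pulling it
	out/minikube-linux-amd64 -p functional-326239 image load docker.io/nginx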
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-326239
helpers_test.go:243: (dbg) docker inspect functional-326239:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b",
	        "Created": "2025-12-06T09:24:58.563992644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 627619,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:24:58.603754235Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/hosts",
	        "LogPath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b-json.log",
	        "Name": "/functional-326239",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-326239:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-326239",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b",
	                "LowerDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027-init/diff:/var/lib/docker/overlay2/e436edcb7322c840f879b3c5d1d6403a3125a1711763277d84155a12f01e0462/diff",
	                "MergedDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027/merged",
	                "UpperDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027/diff",
	                "WorkDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-326239",
	                "Source": "/var/lib/docker/volumes/functional-326239/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-326239",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-326239",
	                "name.minikube.sigs.k8s.io": "functional-326239",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d8644ebd1266b07608ac54125ce4be0a55df19ae0337a89715d5a7b71c158c36",
	            "SandboxKey": "/var/run/docker/netns/d8644ebd1266",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-326239": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa31b1b5343ce0077ab7432095e00979a827c42de8b3b6cbea2885bebf249faf",
	                    "EndpointID": "c7f2a99f28f1ab2776d35311f6b88b250c8e144bc0014a825f0b3bc1d8107e4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "9a:e0:63:fc:a4:9f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-326239",
	                        "2b1e49e27471"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
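The port map and mounts in the inspect dump above can be read back without scanning the full JSON by handing docker inspect a Go template. A minimal sketch against this run's container name (illustrative only, not part of the test):

    docker inspect -f '{{json .NetworkSettings.Ports}}' functional-326239
    docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' functional-326239

Both templates read the same structures shown above (the NetworkSettings.Ports map and the Mounts array).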
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-326239 -n functional-326239
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-326239 logs -n 25: (1.036723459s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount      │ -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1490801804/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ ssh        │ functional-326239 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh -- ls -la /mount-9p                                                                                                           │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh sudo umount -f /mount-9p                                                                                                      │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ mount      │ -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo492193237/001:/mount1 --alsologtostderr -v=1                 │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ mount      │ -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo492193237/001:/mount2 --alsologtostderr -v=1                 │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ mount      │ -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo492193237/001:/mount3 --alsologtostderr -v=1                 │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ ssh        │ functional-326239 ssh findmnt -T /mount1                                                                                                            │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ ssh        │ functional-326239 ssh findmnt -T /mount1                                                                                                            │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh findmnt -T /mount2                                                                                                            │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh findmnt -T /mount3                                                                                                            │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ mount      │ -p functional-326239 --kill=true                                                                                                                    │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ ssh        │ functional-326239 ssh sudo cat /etc/ssl/certs/558759.pem                                                                                            │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh sudo cat /usr/share/ca-certificates/558759.pem                                                                                │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                            │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh sudo cat /etc/ssl/certs/5587592.pem                                                                                           │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh sudo cat /usr/share/ca-certificates/5587592.pem                                                                               │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh        │ functional-326239 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                            │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ start      │ -p functional-326239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0     │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ docker-env │ functional-326239 docker-env                                                                                                                        │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ docker-env │ functional-326239 docker-env                                                                                                                        │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ start      │ -p functional-326239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0     │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ start      │ -p functional-326239 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0               │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ dashboard  │ --url --port 36195 -p functional-326239 --alsologtostderr -v=1                                                                                      │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ ssh        │ functional-326239 ssh sudo cat /etc/test/nested/copy/558759/hosts                                                                                   │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	└────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
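The mount rows in the audit table follow the usual 9p cycle: start a mount in the background, verify it from inside the node with findmnt, then kill all mounts for the profile. The verification and teardown steps, restated as standalone commands (paths as recorded in the table):

    out/minikube-linux-amd64 -p functional-326239 ssh -- findmnt -T /mount-9p
    out/minikube-linux-amd64 mount -p functional-326239 --kill=true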
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:27:49
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:27:49.849880  648422 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:27:49.849988  648422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:49.849996  648422 out.go:374] Setting ErrFile to fd 2...
	I1206 09:27:49.850000  648422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:49.850222  648422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:27:49.850643  648422 out.go:368] Setting JSON to false
	I1206 09:27:49.851669  648422 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7817,"bootTime":1765005453,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:27:49.851731  648422 start.go:143] virtualization: kvm guest
	I1206 09:27:49.853403  648422 out.go:179] * [functional-326239] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:27:49.854528  648422 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:27:49.854519  648422 notify.go:221] Checking for updates...
	I1206 09:27:49.856047  648422 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:27:49.857413  648422 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:27:49.858509  648422 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:27:49.859659  648422 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:27:49.860770  648422 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:27:49.862286  648422 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:49.862844  648422 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:27:49.885450  648422 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:27:49.885628  648422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:27:49.942760  648422 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:27:49.933265972 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:27:49.942863  648422 docker.go:319] overlay module found
	I1206 09:27:49.945331  648422 out.go:179] * Using the docker driver based on existing profile
	I1206 09:27:49.946651  648422 start.go:309] selected driver: docker
	I1206 09:27:49.946664  648422 start.go:927] validating driver "docker" against &{Name:functional-326239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-326239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:49.946748  648422 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:27:49.946833  648422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:27:50.002539  648422 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:27:49.993429398 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:27:50.003268  648422 cni.go:84] Creating CNI manager for ""
	I1206 09:27:50.003381  648422 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1206 09:27:50.003433  648422 start.go:353] cluster config:
	{Name:functional-326239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-326239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:50.005091  648422 out.go:179] * dry-run validation complete!
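This Last Start block is one of the --dry-run invocations recorded in the audit table (most plausibly the final -v=1 run, given the timestamp): minikube loads the existing functional-326239 profile, re-validates the docker driver against it, prints the resolved cluster config, and exits without touching the running container. The command as logged:

    out/minikube-linux-amd64 start -p functional-326239 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=docker --kubernetes-version=v1.35.0-beta.0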
	
	
	==> Docker <==
	Dec 06 09:28:37 functional-326239 dockerd[7374]: time="2025-12-06T09:28:37.738017956Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:28:46 functional-326239 dockerd[7374]: time="2025-12-06T09:28:46.804856310Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:28:55 functional-326239 dockerd[7374]: time="2025-12-06T09:28:55.785877668Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:29:01 functional-326239 dockerd[7374]: time="2025-12-06T09:29:01.783690351Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:29:14 functional-326239 dockerd[7374]: time="2025-12-06T09:29:14.793648667Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:29:18 functional-326239 dockerd[7374]: time="2025-12-06T09:29:18.712628260Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:29:18 functional-326239 dockerd[7374]: time="2025-12-06T09:29:18.743697326Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:29:25 functional-326239 dockerd[7374]: time="2025-12-06T09:29:25.713512639Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:29:25 functional-326239 dockerd[7374]: time="2025-12-06T09:29:25.742826462Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:13 functional-326239 dockerd[7374]: time="2025-12-06T09:30:13.881378097Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:13 functional-326239 cri-dockerd[7694]: time="2025-12-06T09:30:13Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
	Dec 06 09:30:24 functional-326239 dockerd[7374]: time="2025-12-06T09:30:24.809413731Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:27 functional-326239 dockerd[7374]: time="2025-12-06T09:30:27.791255008Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:46 functional-326239 dockerd[7374]: time="2025-12-06T09:30:46.796185949Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:46 functional-326239 dockerd[7374]: time="2025-12-06T09:30:46.814405222Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:30:46 functional-326239 dockerd[7374]: time="2025-12-06T09:30:46.843920119Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:48 functional-326239 dockerd[7374]: time="2025-12-06T09:30:48.713308009Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:30:48 functional-326239 dockerd[7374]: time="2025-12-06T09:30:48.741239181Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:00 functional-326239 dockerd[7374]: time="2025-12-06T09:33:00.864570088Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:00 functional-326239 cri-dockerd[7694]: time="2025-12-06T09:33:00Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
	Dec 06 09:33:05 functional-326239 dockerd[7374]: time="2025-12-06T09:33:05.787693599Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:13 functional-326239 dockerd[7374]: time="2025-12-06T09:33:13.797758574Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:27 functional-326239 dockerd[7374]: time="2025-12-06T09:33:27.710130847Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:33:27 functional-326239 dockerd[7374]: time="2025-12-06T09:33:27.739423694Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:29 functional-326239 dockerd[7374]: time="2025-12-06T09:33:29.785671239Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
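Every pull attempt in this window dies on Docker Hub's unauthenticated rate limit, so the kicbase/echo-server and kubernetesui images named above never arrive. Two standard mitigations, sketched here rather than taken from the run: authenticate the daemon, or side-load an image that already exists on the host:

    docker login                                                            # authenticated pulls get a higher rate limit
    out/minikube-linux-amd64 -p functional-326239 image load kicbase/echo-server:latest   # assumes the image is in the host daemon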
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a0fcd81222f11       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   b0778e6b23fcb       busybox-mount                               default
	3dfef744435d8       nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14                         6 minutes ago       Running             nginx                     0                   e349fc4073f7d       nginx-svc                                   default
	2848af042375d       aa5e3ebc0dfed                                                                                         6 minutes ago       Running             coredns                   2                   7a2e0dfb83c2d       coredns-7d764666f9-dpsjp                    kube-system
	de7b60d3f85f2       8a4ded35a3eb1                                                                                         6 minutes ago       Running             kube-proxy                2                   3fe5270d01a93       kube-proxy-4cczw                            kube-system
	87dba194ce022       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       3                   b129f4caf585d       storage-provisioner                         kube-system
	89bcf71329c96       45f3cc72d235f                                                                                         6 minutes ago       Running             kube-controller-manager   2                   ee92ed20ec021       kube-controller-manager-functional-326239   kube-system
	2876efeddbfb0       7bb6219ddab95                                                                                         6 minutes ago       Running             kube-scheduler            2                   745e53746de0f       kube-scheduler-functional-326239            kube-system
	9650b470b5357       aa9d02839d8de                                                                                         6 minutes ago       Running             kube-apiserver            0                   946d04b945773       kube-apiserver-functional-326239            kube-system
	252ea51f29295       a3e246e9556e9                                                                                         6 minutes ago       Running             etcd                      2                   49112b9c9d6bb       etcd-functional-326239                      kube-system
	0d96bdf31a92d       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       2                   bed0374ffef7a       storage-provisioner                         kube-system
	a04216aaaa0d9       aa5e3ebc0dfed                                                                                         7 minutes ago       Exited              coredns                   1                   bf0bf6c39969e       coredns-7d764666f9-dpsjp                    kube-system
	8f00d318162af       7bb6219ddab95                                                                                         7 minutes ago       Exited              kube-scheduler            1                   c651a32ac47bc       kube-scheduler-functional-326239            kube-system
	d13bd55cfe897       a3e246e9556e9                                                                                         7 minutes ago       Exited              etcd                      1                   800264e9eef0e       etcd-functional-326239                      kube-system
	94649648cd635       8a4ded35a3eb1                                                                                         7 minutes ago       Exited              kube-proxy                1                   612f3626dc189       kube-proxy-4cczw                            kube-system
	32435a975a61d       45f3cc72d235f                                                                                         7 minutes ago       Exited              kube-controller-manager   1                   82259343f519a       kube-controller-manager-functional-326239   kube-system
	
	
	==> coredns [2848af042375] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:54375 - 5356 "HINFO IN 2277724273926742442.4370476464319504986. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.442396993s
	
	
	==> coredns [a04216aaaa0d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:48241 - 39332 "HINFO IN 252694381531183033.4922054418079394582. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.045627179s
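Both CoreDNS excerpts end healthy (serving on :53); the a04216aaaa0d instance additionally shows the normal waits for the Kubernetes API and one failed watch from the restart window. The same logs are available straight from the cluster, assuming the standard k8s-app=kube-dns label:

    kubectl --context functional-326239 -n kube-system logs -l k8s-app=kube-dns --tail=20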
	
	
	==> describe nodes <==
	Name:               functional-326239
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-326239
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=functional-326239
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_25_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:25:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-326239
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:33:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:31:47 +0000   Sat, 06 Dec 2025 09:25:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:31:47 +0000   Sat, 06 Dec 2025 09:25:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:31:47 +0000   Sat, 06 Dec 2025 09:25:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:31:47 +0000   Sat, 06 Dec 2025 09:25:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-326239
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                35f8e24a-ae6f-4c51-b491-d09628d40f26
	  Boot ID:                    41ef56f7-de94-4c23-8e93-ec48e4e68466
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.1.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-x4599                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  default                     hello-node-connect-9f67c86d4-zw8gz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  default                     mysql-844cf969f6-87mns                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     5m39s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-7d764666f9-dpsjp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m16s
	  kube-system                 etcd-functional-326239                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m23s
	  kube-system                 kube-apiserver-functional-326239              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-functional-326239     200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-proxy-4cczw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-scheduler-functional-326239              100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-sbmk5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zlv6s          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  8m17s  node-controller  Node functional-326239 event: Registered Node functional-326239 in Controller
	  Normal  RegisteredNode  7m15s  node-controller  Node functional-326239 event: Registered Node functional-326239 in Controller
	  Normal  RegisteredNode  6m28s  node-controller  Node functional-326239 event: Registered Node functional-326239 in Controller
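This node description is where to confirm the overcommit note is benign in this run: total requests are 1350m CPU / 682Mi memory against an 8-CPU, ~32Gi node, and all four pressure conditions are False. To re-check live:

    kubectl --context functional-326239 describe node functional-326239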
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[  +1.299675] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 16 c7 72 c4 93 08 06
	[  +0.000525] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[Dec 6 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 3f cc d3 d9 16 08 06
	[  +0.000633] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[  +0.000768] IPv4: martian source 10.244.0.32 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[Dec 6 09:12] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 81 03 c8 c4 0c 08 06
	[Dec 6 09:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 ce 4a 36 be 39 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 6e 96 20 7e 61 08 06
	[Dec 6 09:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000002] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8c d8 e5 b5 d0 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff fe d1 60 dc a1 8a 08 06
	[Dec 6 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 c0 1c 2b e1 b2 08 06
	[Dec 6 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 33 29 de 35 84 08 06
	
	
	==> etcd [252ea51f2929] <==
	{"level":"warn","ts":"2025-12-06T09:27:00.446427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.453173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.459906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.466081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.472849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.487722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.494784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.501928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.508583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.515262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.522039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.528572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.535363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.547138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.553749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.562415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.569045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.575692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.590169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.596750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.604034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.611260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.618192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.658167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.708674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35742","server-name":"","error":"EOF"}
	
	
	==> etcd [d13bd55cfe89] <==
	{"level":"warn","ts":"2025-12-06T09:26:13.711985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.719248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.728349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.734991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.741886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.748836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.755421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.761648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.768049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.775806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.787179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.794102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.801937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.809234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.817575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.824368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.832430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.840229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.846990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.854824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.862270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.879470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.892726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.899797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.947416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53200","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:33:32 up  2:15,  0 user,  load average: 0.13, 0.28, 0.70
	Linux functional-326239 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [9650b470b535] <==
	I1206 09:27:01.162140       1 cache.go:39] Caches are synced for autoregister controller
	I1206 09:27:01.165264       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:27:01.167086       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:27:01.167632       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	E1206 09:27:01.168251       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:27:01.171741       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:01.171760       1 policy_source.go:248] refreshing policies
	I1206 09:27:01.185414       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:27:01.792711       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:27:02.069434       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:27:02.847290       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:27:02.882960       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:27:02.913018       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:27:02.921097       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:27:04.549778       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:27:04.648419       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:27:20.089722       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.126.45"}
	I1206 09:27:25.680231       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.95.210"}
	I1206 09:27:26.265885       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:27:26.342843       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.7.69"}
	I1206 09:27:29.303777       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.78.28"}
	I1206 09:27:50.824501       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:27:50.924501       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.177.225"}
	I1206 09:27:50.933691       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.141.242"}
	I1206 09:27:53.497457       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.111.90"}
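The lone error above ("no API server IP addresses were listed in storage, refusing to erase all endpoints") is a well-known transient message when an apiserver restarts before re-registering its own IP; the subsequent clusterIP allocations show it recovered. A quick hedged check that the kubernetes Service endpoints were repopulated:

  kubectl --context functional-326239 get endpoints kubernetes -n default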
	
	
	==> kube-controller-manager [32435a975a61] <==
	I1206 09:26:17.543338       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542791       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.543445       1 range_allocator.go:177] "Sending events to api server"
	I1206 09:26:17.543505       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1206 09:26:17.543511       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:17.543517       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.543584       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542776       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.543602       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544476       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544596       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544653       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544828       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542651       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544968       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.545005       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.545183       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.545874       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542799       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.551452       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.555386       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:17.643251       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.643285       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:26:17.643293       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:26:17.655982       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [89bcf71329c9] <==
	I1206 09:27:04.304600       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.304860       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.304948       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.304976       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305159       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305209       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1206 09:27:04.305279       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305286       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-326239"
	I1206 09:27:04.305372       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1206 09:27:04.305634       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305646       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305679       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305765       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.306019       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.306689       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.306950       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:27:04.403997       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.404025       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:27:04.404030       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:27:04.407939       1 shared_informer.go:377] "Caches are synced"
	E1206 09:27:50.867219       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.873234       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.879428       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.880773       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.883077       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [94649648cd63] <==
	I1206 09:26:12.602160       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:26:12.674751       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:14.377086       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:14.377355       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:26:14.377596       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:26:14.416940       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:26:14.417016       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:26:14.423977       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:26:14.425728       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:26:14.425755       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:26:14.427928       1 config.go:200] "Starting service config controller"
	I1206 09:26:14.427960       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:26:14.428012       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:26:14.428023       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:26:14.428030       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:26:14.428043       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:26:14.428068       1 config.go:309] "Starting node config controller"
	I1206 09:26:14.428083       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:26:14.428090       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:26:14.529013       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:26:14.529035       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:26:14.529051       1 shared_informer.go:356] "Caches are synced" controller="service config"
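The one error in this block is kube-proxy noting that nodePortAddresses is unset, together with its own suggested remedy (`--nodeport-addresses primary`). In a kubeadm-provisioned cluster such as minikube, kube-proxy reads its configuration from the kube-proxy ConfigMap, so a hedged sketch of applying the suggestion is:

  # edit the config.conf key and set: nodePortAddresses: ["primary"]
  kubectl --context functional-326239 -n kube-system edit configmap kube-proxy
  # then restart the daemonset so the change is picked up
  kubectl --context functional-326239 -n kube-system rollout restart daemonset kube-proxy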
	
	
	==> kube-proxy [de7b60d3f85f] <==
	I1206 09:27:02.323363       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:27:02.393893       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:27:02.494373       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:02.494413       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:27:02.494497       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:27:02.516848       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:27:02.516936       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:27:02.522473       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:27:02.522825       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:27:02.522848       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:27:02.524250       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:27:02.524275       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:27:02.524276       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:27:02.524301       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:27:02.524280       1 config.go:200] "Starting service config controller"
	I1206 09:27:02.524351       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:27:02.524372       1 config.go:309] "Starting node config controller"
	I1206 09:27:02.524378       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:27:02.624406       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:27:02.624426       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:27:02.624452       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:27:02.624569       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2876efeddbfb] <==
	I1206 09:26:59.727097       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:27:01.087550       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:27:01.087780       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:27:01.087813       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:27:01.087823       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:27:01.107671       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:27:01.107709       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:27:01.110792       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:27:01.110819       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:27:01.110831       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:27:01.111018       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:27:01.211217       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [8f00d318162a] <==
	I1206 09:26:13.180671       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:26:14.316173       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:26:14.316208       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:26:14.316219       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:26:14.316229       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:26:14.355000       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:26:14.355040       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:26:14.360961       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:26:14.361136       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:26:14.361154       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:26:14.361969       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:14.462893       1 shared_informer.go:377] "Caches are synced"
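Both scheduler instances log the same requestheader warning; it is expected noise in kubeadm-style clusters, since system:kube-scheduler has no RBAC grant to read the extension-apiserver-authentication ConfigMap and the scheduler tolerates the lookup failure (note the --authentication-tolerate-lookup-failure line). If the grant were actually wanted, the message's own suggestion applies; a concrete hedged instance uses --user, because the scheduler authenticates as the user system:kube-scheduler rather than a service account:

  kubectl --context functional-326239 create rolebinding extension-apiserver-authentication-reader \
    -n kube-system \
    --role=extension-apiserver-authentication-reader \
    --user=system:kube-scheduler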
	
	
	==> kubelet <==
	Dec 06 09:33:09 functional-326239 kubelet[8745]: E1206 09:33:09.696578    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s" podUID="ac224612-ccb6-4df5-8fdb-c2360339af04"
	Dec 06 09:33:13 functional-326239 kubelet[8745]: E1206 09:33:13.800273    8745 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 06 09:33:13 functional-326239 kubelet[8745]: E1206 09:33:13.800338    8745 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 06 09:33:13 functional-326239 kubelet[8745]: E1206 09:33:13.800592    8745 kuberuntime_manager.go:1664] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(8de345de-964f-44a8-9994-19eb0772df93): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:33:13 functional-326239 kubelet[8745]: E1206 09:33:13.800639    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8de345de-964f-44a8-9994-19eb0772df93"
	Dec 06 09:33:15 functional-326239 kubelet[8745]: E1206 09:33:15.696311    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-87mns" podUID="228dd7da-28bd-4f2a-a53d-ca0ef6a0d7c8"
	Dec 06 09:33:16 functional-326239 kubelet[8745]: E1206 09:33:16.694183    8745 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" containerName="dashboard-metrics-scraper"
	Dec 06 09:33:16 functional-326239 kubelet[8745]: E1206 09:33:16.694770    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-x4599" podUID="47d3fefa-9586-4712-914d-c9afc666299e"
	Dec 06 09:33:16 functional-326239 kubelet[8745]: E1206 09:33:16.696866    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" podUID="27202e3f-a7f1-4a1d-8885-3705d48bb1b7"
	Dec 06 09:33:17 functional-326239 kubelet[8745]: E1206 09:33:17.694303    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-zw8gz" podUID="fd8e4b13-e45a-40be-9fa0-1e7579b8d00f"
	Dec 06 09:33:23 functional-326239 kubelet[8745]: E1206 09:33:23.694382    8745 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-dpsjp" containerName="coredns"
	Dec 06 09:33:23 functional-326239 kubelet[8745]: E1206 09:33:23.694452    8745 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s" containerName="kubernetes-dashboard"
	Dec 06 09:33:23 functional-326239 kubelet[8745]: E1206 09:33:23.696746    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s" podUID="ac224612-ccb6-4df5-8fdb-c2360339af04"
	Dec 06 09:33:27 functional-326239 kubelet[8745]: E1206 09:33:27.694489    8745 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" containerName="dashboard-metrics-scraper"
	Dec 06 09:33:27 functional-326239 kubelet[8745]: E1206 09:33:27.741647    8745 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:33:27 functional-326239 kubelet[8745]: E1206 09:33:27.741701    8745 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:33:27 functional-326239 kubelet[8745]: E1206 09:33:27.741890    8745 kuberuntime_manager.go:1664] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-5565989548-sbmk5_kubernetes-dashboard(27202e3f-a7f1-4a1d-8885-3705d48bb1b7): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:33:27 functional-326239 kubelet[8745]: E1206 09:33:27.741957    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" podUID="27202e3f-a7f1-4a1d-8885-3705d48bb1b7"
	Dec 06 09:33:28 functional-326239 kubelet[8745]: E1206 09:33:28.694936    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-zw8gz" podUID="fd8e4b13-e45a-40be-9fa0-1e7579b8d00f"
	Dec 06 09:33:29 functional-326239 kubelet[8745]: E1206 09:33:29.694577    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8de345de-964f-44a8-9994-19eb0772df93"
	Dec 06 09:33:29 functional-326239 kubelet[8745]: E1206 09:33:29.694828    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-x4599" podUID="47d3fefa-9586-4712-914d-c9afc666299e"
	Dec 06 09:33:29 functional-326239 kubelet[8745]: E1206 09:33:29.788234    8745 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 06 09:33:29 functional-326239 kubelet[8745]: E1206 09:33:29.788287    8745 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 06 09:33:29 functional-326239 kubelet[8745]: E1206 09:33:29.788464    8745 kuberuntime_manager.go:1664] "Unhandled Error" err="container mysql start failed in pod mysql-844cf969f6-87mns_default(228dd7da-28bd-4f2a-a53d-ca0ef6a0d7c8): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 06 09:33:29 functional-326239 kubelet[8745]: E1206 09:33:29.788494    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-87mns" podUID="228dd7da-28bd-4f2a-a53d-ca0ef6a0d7c8"
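Every kubelet error above shares one root cause: unauthenticated pulls from docker.io hitting the Docker Hub rate limit (toomanyrequests), not a fault in the pods under test. Two hedged mitigations, with <user> and <token> as placeholders:

  # pre-load the images from the host so the kubelet never has to pull them
  minikube -p functional-326239 image load docker.io/mysql:5.7
  minikube -p functional-326239 image load docker.io/nginx:alpine
  # or authenticate the pulls via an image pull secret on the default service account
  kubectl --context functional-326239 create secret docker-registry regcred \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=<user> --docker-password=<token>
  kubectl --context functional-326239 patch serviceaccount default \
    -p '{"imagePullSecrets":[{"name":"regcred"}]}'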
	
	
	==> storage-provisioner [0d96bdf31a92] <==
	I1206 09:26:24.816621       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:26:24.816671       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:26:24.818753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:28.274068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:32.535027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:36.133306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:39.189800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:42.212252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:42.218505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:26:42.218741       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:26:42.218864       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a7a2486b-50c5-43aa-87c6-fe9171bc66e3", APIVersion:"v1", ResourceVersion:"553", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-326239_06d988fb-59e4-447c-94c6-ab6bc5c0a7f4 became leader
	I1206 09:26:42.218959       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-326239_06d988fb-59e4-447c-94c6-ab6bc5c0a7f4!
	W1206 09:26:42.220599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:42.223472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:26:42.320091       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-326239_06d988fb-59e4-447c-94c6-ab6bc5c0a7f4!
	W1206 09:26:44.227019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:44.230865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:46.234176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:46.238415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:48.242019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:48.246112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:50.249752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:50.254841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:52.258225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:52.262127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
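The deprecation warnings here come from the provisioner's leader election, which still round-trips the v1 Endpoints object kube-system/k8s.io-minikube-hostpath (acquired at 09:26:42 above); current client-go leader election uses coordination.k8s.io/v1 Leases instead. A hedged way to inspect both mechanisms:

  kubectl --context functional-326239 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
  kubectl --context functional-326239 -n kube-system get leases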
	
	
	==> storage-provisioner [87dba194ce02] <==
	W1206 09:33:09.005868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:11.008874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:11.012640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:13.015768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:13.019798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:15.022966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:15.028476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:17.031160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:17.035130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:19.037806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:19.041472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:21.044713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:21.048616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:23.051452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:23.056424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:25.059519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:25.063533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:27.066600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:27.070385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:29.073969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:29.077776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:31.080529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:31.084322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:33.087839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:33:33.092237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-326239 -n functional-326239
helpers_test.go:269: (dbg) Run:  kubectl --context functional-326239 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz mysql-844cf969f6-87mns sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-326239 describe pod busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz mysql-844cf969f6-87mns sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-326239 describe pod busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz mysql-844cf969f6-87mns sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s: exit status 1 (93.582923ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:37 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://a0fcd81222f118539b5967330da5243f390d47260cea6ccca50207c84ffeab6c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 06 Dec 2025 09:27:39 +0000
	      Finished:     Sat, 06 Dec 2025 09:27:39 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gpqn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-5gpqn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m56s  default-scheduler  Successfully assigned default/busybox-mount to functional-326239
	  Normal  Pulling    5m55s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m54s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.363s (1.363s including waiting). Image size: 4403845 bytes.
	  Normal  Created    5m54s  kubelet            Container created
	  Normal  Started    5m54s  kubelet            Container started
	
	
	Name:             hello-node-5758569b79-x4599
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:29 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ldsr2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ldsr2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m4s                   default-scheduler  Successfully assigned default/hello-node-5758569b79-x4599 to functional-326239
	  Warning  Failed     4m47s (x3 over 5m52s)  kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m20s (x5 over 6m4s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m20s (x2 over 6m4s)   kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m20s (x5 over 6m4s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    60s (x21 over 6m3s)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     60s (x21 over 6m3s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-9f67c86d4-zw8gz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:26 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2872p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2872p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m7s                 default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-zw8gz to functional-326239
	  Normal   Pulling    3m9s (x5 over 6m7s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     3m9s (x5 over 6m6s)  kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m9s (x5 over 6m6s)  kubelet            Error: ErrImagePull
	  Warning  Failed     58s (x20 over 6m5s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    43s (x21 over 6m5s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             mysql-844cf969f6-87mns
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:53 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.16
	IPs:
	  IP:           10.244.0.16
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xxj6s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xxj6s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m40s                  default-scheduler  Successfully assigned default/mysql-844cf969f6-87mns to functional-326239
	  Normal   Pulling    2m47s (x5 over 5m39s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m47s (x5 over 5m39s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m47s (x5 over 5m39s)  kubelet            Error: ErrImagePull
	  Warning  Failed     33s (x20 over 5m39s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    18s (x21 over 5m39s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w6q4p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-w6q4p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-326239
	  Normal   Pulling    3m6s (x5 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m6s (x5 over 6m1s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m6s (x5 over 6m1s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    57s (x21 over 6m1s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     57s (x21 over 6m1s)  kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-sbmk5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-zlv6s" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-326239 describe pod busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz mysql-844cf969f6-87mns sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (368.70s)
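Read together with the describe output, the PersistentVolumeClaim machinery itself worked: sp-pod scheduled and mounted the mypd volume backed by claim myclaim (PodReadyToStartContainers is True), and the only remaining failure is the docker.io/nginx pull. A hedged way to rerun the PVC path without touching docker.io is to check which images are already cached in the cluster and point the test pod at one of those:

  minikube -p functional-326239 image ls
  # e.g. gcr.io/k8s-minikube/busybox:1.28.4-glibc pulled successfully earlier in this run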

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-326239 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-87mns" [228dd7da-28bd-4f2a-a53d-ca0ef6a0d7c8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1206 09:28:23.160824  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:29:49.803088  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-059985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
[the identical "Loading client cert failed" error for profile functional-059985 repeats 15 more times between 09:29:49 and 09:32:33, at exponentially increasing intervals]
E1206 09:32:00.097568  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-326239 -n functional-326239
functional_test.go:1804: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: showing logs for failed pods as of 2025-12-06 09:37:53.849706325 +0000 UTC m=+2259.556345974
functional_test.go:1804: (dbg) Run:  kubectl --context functional-326239 describe po mysql-844cf969f6-87mns -n default
functional_test.go:1804: (dbg) kubectl --context functional-326239 describe po mysql-844cf969f6-87mns -n default:
Name:             mysql-844cf969f6-87mns
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-326239/192.168.49.2
Start Time:       Sat, 06 Dec 2025 09:27:53 +0000
Labels:           app=mysql
pod-template-hash=844cf969f6
Annotations:      <none>
Status:           Pending
IP:               10.244.0.16
IPs:
IP:           10.244.0.16
Controlled By:  ReplicaSet/mysql-844cf969f6
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP (mysql)
Host Port:      0/TCP (mysql)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xxj6s (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-xxj6s:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-844cf969f6-87mns to functional-326239
Normal   Pulling    7m7s (x5 over 9m59s)    kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     7m7s (x5 over 9m59s)    kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m7s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Warning  Failed     4m53s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m38s (x21 over 9m59s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
functional_test.go:1804: (dbg) Run:  kubectl --context functional-326239 logs mysql-844cf969f6-87mns -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-326239 logs mysql-844cf969f6-87mns -n default: exit status 1 (58.957038ms)

** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-844cf969f6-87mns" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1804: kubectl --context functional-326239 logs mysql-844cf969f6-87mns -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
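Every pull attempt above failed with Docker Hub's unauthenticated rate limit (toomanyrequests), so the timeout is an infrastructure problem rather than a product regression. A minimal workaround sketch, assuming the CI host can pull the image itself (for example after an authenticated docker login), is to side-load the image into the node so kubelet never has to contact the registry:

    docker pull docker.io/mysql:5.7
    minikube -p functional-326239 image load docker.io/mysql:5.7

After the load, deleting the pod lets the ReplicaSet recreate it and start from the now-local image.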
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-326239
helpers_test.go:243: (dbg) docker inspect functional-326239:

-- stdout --
	[
	    {
	        "Id": "2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b",
	        "Created": "2025-12-06T09:24:58.563992644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 627619,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T09:24:58.603754235Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bc8026154dd65da61b914564a2888a4ef870360162bd8e45b8c6d537ab6c86c0",
	        "ResolvConfPath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/hosts",
	        "LogPath": "/var/lib/docker/containers/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b/2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b-json.log",
	        "Name": "/functional-326239",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-326239:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-326239",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2b1e49e27471792e1b27b65e9565fce215a1d670722342eb02ba3b3bfc4db42b",
	                "LowerDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027-init/diff:/var/lib/docker/overlay2/e436edcb7322c840f879b3c5d1d6403a3125a1711763277d84155a12f01e0462/diff",
	                "MergedDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027/merged",
	                "UpperDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027/diff",
	                "WorkDir": "/var/lib/docker/overlay2/067cc226170adb67b087cf04db318db0b550cdcb164c92664647a4f2afba8027/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-326239",
	                "Source": "/var/lib/docker/volumes/functional-326239/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-326239",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-326239",
	                "name.minikube.sigs.k8s.io": "functional-326239",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d8644ebd1266b07608ac54125ce4be0a55df19ae0337a89715d5a7b71c158c36",
	            "SandboxKey": "/var/run/docker/netns/d8644ebd1266",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-326239": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fa31b1b5343ce0077ab7432095e00979a827c42de8b3b6cbea2885bebf249faf",
	                    "EndpointID": "c7f2a99f28f1ab2776d35311f6b88b250c8e144bc0014a825f0b3bc1d8107e4c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "9a:e0:63:fc:a4:9f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-326239",
	                        "2b1e49e27471"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
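When only a single field from a dump like the above is needed, docker inspect's --format flag with a Go template avoids parsing the full JSON; for example, reading back the forwarded SSH port recorded under NetworkSettings.Ports:

    docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' functional-326239

For the container above this prints 33186, matching the "22/tcp" binding in the NetworkSettings section.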
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-326239 -n functional-326239
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                      ARGS                                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-326239 ssh sudo cat /etc/ssl/certs/5587592.pem                                                                                       │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh            │ functional-326239 ssh sudo cat /usr/share/ca-certificates/5587592.pem                                                                           │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ ssh            │ functional-326239 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                        │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ start          │ -p functional-326239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0 │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ docker-env     │ functional-326239 docker-env                                                                                                                    │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ docker-env     │ functional-326239 docker-env                                                                                                                    │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ start          │ -p functional-326239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0 │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ start          │ -p functional-326239 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0           │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-326239 --alsologtostderr -v=1                                                                                  │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │                     │
	│ ssh            │ functional-326239 ssh sudo cat /etc/test/nested/copy/558759/hosts                                                                               │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:27 UTC │ 06 Dec 25 09:27 UTC │
	│ image          │ functional-326239 image ls --format short --alsologtostderr                                                                                     │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-326239 image ls --format yaml --alsologtostderr                                                                                      │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ ssh            │ functional-326239 ssh pgrep buildkitd                                                                                                           │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │                     │
	│ image          │ functional-326239 image build -t localhost/my-image:functional-326239 testdata/build --alsologtostderr                                          │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-326239 image ls                                                                                                                      │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-326239 image ls --format json --alsologtostderr                                                                                      │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ image          │ functional-326239 image ls --format table --alsologtostderr                                                                                     │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ update-context │ functional-326239 update-context --alsologtostderr -v=2                                                                                         │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ update-context │ functional-326239 update-context --alsologtostderr -v=2                                                                                         │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ update-context │ functional-326239 update-context --alsologtostderr -v=2                                                                                         │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:33 UTC │ 06 Dec 25 09:33 UTC │
	│ service        │ functional-326239 service list                                                                                                                  │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:37 UTC │ 06 Dec 25 09:37 UTC │
	│ service        │ functional-326239 service list -o json                                                                                                          │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:37 UTC │ 06 Dec 25 09:37 UTC │
	│ service        │ functional-326239 service --namespace=default --https --url hello-node                                                                          │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:37 UTC │                     │
	│ service        │ functional-326239 service hello-node --url --format={{.IP}}                                                                                     │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:37 UTC │                     │
	│ service        │ functional-326239 service hello-node --url                                                                                                      │ functional-326239 │ jenkins │ v1.37.0 │ 06 Dec 25 09:37 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:27:49
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:27:49.849880  648422 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:27:49.849988  648422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:49.849996  648422 out.go:374] Setting ErrFile to fd 2...
	I1206 09:27:49.850000  648422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:49.850222  648422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:27:49.850643  648422 out.go:368] Setting JSON to false
	I1206 09:27:49.851669  648422 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7817,"bootTime":1765005453,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:27:49.851731  648422 start.go:143] virtualization: kvm guest
	I1206 09:27:49.853403  648422 out.go:179] * [functional-326239] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:27:49.854528  648422 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:27:49.854519  648422 notify.go:221] Checking for updates...
	I1206 09:27:49.856047  648422 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:27:49.857413  648422 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:27:49.858509  648422 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:27:49.859659  648422 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:27:49.860770  648422 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:27:49.862286  648422 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:49.862844  648422 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:27:49.885450  648422 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:27:49.885628  648422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:27:49.942760  648422 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:27:49.933265972 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:27:49.942863  648422 docker.go:319] overlay module found
	I1206 09:27:49.945331  648422 out.go:179] * Using the docker driver based on existing profile
	I1206 09:27:49.946651  648422 start.go:309] selected driver: docker
	I1206 09:27:49.946664  648422 start.go:927] validating driver "docker" against &{Name:functional-326239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-326239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:49.946748  648422 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:27:49.946833  648422 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:27:50.002539  648422 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:27:49.993429398 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:27:50.003268  648422 cni.go:84] Creating CNI manager for ""
	I1206 09:27:50.003381  648422 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1206 09:27:50.003433  648422 start.go:353] cluster config:
	{Name:functional-326239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-326239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:50.005091  648422 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 06 09:29:14 functional-326239 dockerd[7374]: time="2025-12-06T09:29:14.793648667Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:29:18 functional-326239 dockerd[7374]: time="2025-12-06T09:29:18.712628260Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:29:18 functional-326239 dockerd[7374]: time="2025-12-06T09:29:18.743697326Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:29:25 functional-326239 dockerd[7374]: time="2025-12-06T09:29:25.713512639Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:29:25 functional-326239 dockerd[7374]: time="2025-12-06T09:29:25.742826462Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:13 functional-326239 dockerd[7374]: time="2025-12-06T09:30:13.881378097Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:13 functional-326239 cri-dockerd[7694]: time="2025-12-06T09:30:13Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
	Dec 06 09:30:24 functional-326239 dockerd[7374]: time="2025-12-06T09:30:24.809413731Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:27 functional-326239 dockerd[7374]: time="2025-12-06T09:30:27.791255008Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:46 functional-326239 dockerd[7374]: time="2025-12-06T09:30:46.796185949Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:46 functional-326239 dockerd[7374]: time="2025-12-06T09:30:46.814405222Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:30:46 functional-326239 dockerd[7374]: time="2025-12-06T09:30:46.843920119Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:30:48 functional-326239 dockerd[7374]: time="2025-12-06T09:30:48.713308009Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:30:48 functional-326239 dockerd[7374]: time="2025-12-06T09:30:48.741239181Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:00 functional-326239 dockerd[7374]: time="2025-12-06T09:33:00.864570088Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:00 functional-326239 cri-dockerd[7694]: time="2025-12-06T09:33:00Z" level=info msg="Stop pulling image kicbase/echo-server:latest: latest: Pulling from kicbase/echo-server"
	Dec 06 09:33:05 functional-326239 dockerd[7374]: time="2025-12-06T09:33:05.787693599Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:13 functional-326239 dockerd[7374]: time="2025-12-06T09:33:13.797758574Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:27 functional-326239 dockerd[7374]: time="2025-12-06T09:33:27.710130847Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Dec 06 09:33:27 functional-326239 dockerd[7374]: time="2025-12-06T09:33:27.739423694Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:29 functional-326239 dockerd[7374]: time="2025-12-06T09:33:29.785671239Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:35 functional-326239 dockerd[7374]: 2025/12/06 09:33:35 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Dec 06 09:33:36 functional-326239 dockerd[7374]: time="2025-12-06T09:33:36.718549754Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Dec 06 09:33:36 functional-326239 dockerd[7374]: time="2025-12-06T09:33:36.750032252Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Dec 06 09:33:37 functional-326239 dockerd[7374]: time="2025-12-06T09:33:37.176798777Z" level=info msg="sbJoin: gwep4 ''->'9364f6c5debd', gwep6 ''->''"
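The toomanyrequests entries throughout this journal all come from anonymous docker.io pulls made inside the node. One possible mitigation, not something this job currently configures, is to start the profile with a registry mirror so that Docker Hub is never contacted directly:

    minikube start -p functional-326239 --driver=docker --registry-mirror=https://mirror.gcr.io

The --registry-mirror value is passed through to the node's Docker daemon, which tries the mirror first and only falls back to docker.io if the mirror misses.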
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a0fcd81222f11       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              mount-munger              0                   b0778e6b23fcb       busybox-mount                               default
	3dfef744435d8       nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14                         10 minutes ago      Running             nginx                     0                   e349fc4073f7d       nginx-svc                                   default
	2848af042375d       aa5e3ebc0dfed                                                                                         10 minutes ago      Running             coredns                   2                   7a2e0dfb83c2d       coredns-7d764666f9-dpsjp                    kube-system
	de7b60d3f85f2       8a4ded35a3eb1                                                                                         10 minutes ago      Running             kube-proxy                2                   3fe5270d01a93       kube-proxy-4cczw                            kube-system
	87dba194ce022       6e38f40d628db                                                                                         10 minutes ago      Running             storage-provisioner       3                   b129f4caf585d       storage-provisioner                         kube-system
	89bcf71329c96       45f3cc72d235f                                                                                         10 minutes ago      Running             kube-controller-manager   2                   ee92ed20ec021       kube-controller-manager-functional-326239   kube-system
	2876efeddbfb0       7bb6219ddab95                                                                                         10 minutes ago      Running             kube-scheduler            2                   745e53746de0f       kube-scheduler-functional-326239            kube-system
	9650b470b5357       aa9d02839d8de                                                                                         10 minutes ago      Running             kube-apiserver            0                   946d04b945773       kube-apiserver-functional-326239            kube-system
	252ea51f29295       a3e246e9556e9                                                                                         10 minutes ago      Running             etcd                      2                   49112b9c9d6bb       etcd-functional-326239                      kube-system
	0d96bdf31a92d       6e38f40d628db                                                                                         11 minutes ago      Exited              storage-provisioner       2                   bed0374ffef7a       storage-provisioner                         kube-system
	a04216aaaa0d9       aa5e3ebc0dfed                                                                                         11 minutes ago      Exited              coredns                   1                   bf0bf6c39969e       coredns-7d764666f9-dpsjp                    kube-system
	8f00d318162af       7bb6219ddab95                                                                                         11 minutes ago      Exited              kube-scheduler            1                   c651a32ac47bc       kube-scheduler-functional-326239            kube-system
	d13bd55cfe897       a3e246e9556e9                                                                                         11 minutes ago      Exited              etcd                      1                   800264e9eef0e       etcd-functional-326239                      kube-system
	94649648cd635       8a4ded35a3eb1                                                                                         11 minutes ago      Exited              kube-proxy                1                   612f3626dc189       kube-proxy-4cczw                            kube-system
	32435a975a61d       45f3cc72d235f                                                                                         11 minutes ago      Exited              kube-controller-manager   1                   82259343f519a       kube-controller-manager-functional-326239   kube-system
	
	
	==> coredns [2848af042375] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:54375 - 5356 "HINFO IN 2277724273926742442.4370476464319504986. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.442396993s
	
	
	==> coredns [a04216aaaa0d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:48241 - 39332 "HINFO IN 252694381531183033.4922054418079394582. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.045627179s
	
	
	==> describe nodes <==
	Name:               functional-326239
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-326239
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=functional-326239
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T09_25_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 09:25:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-326239
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 09:37:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 09:34:00 +0000   Sat, 06 Dec 2025 09:25:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 09:34:00 +0000   Sat, 06 Dec 2025 09:25:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 09:34:00 +0000   Sat, 06 Dec 2025 09:25:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 09:34:00 +0000   Sat, 06 Dec 2025 09:25:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-326239
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6e66d6047cad46f36f1a6e369316001
	  System UUID:                35f8e24a-ae6f-4c51-b491-d09628d40f26
	  Boot ID:                    41ef56f7-de94-4c23-8e93-ec48e4e68466
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://29.1.2
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-x4599                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-9f67c86d4-zw8gz            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-844cf969f6-87mns                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7d764666f9-dpsjp                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-326239                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-326239              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-326239     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4cczw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-326239              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-sbmk5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-zlv6s          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  12m   node-controller  Node functional-326239 event: Registered Node functional-326239 in Controller
	  Normal  RegisteredNode  11m   node-controller  Node functional-326239 event: Registered Node functional-326239 in Controller
	  Normal  RegisteredNode  10m   node-controller  Node functional-326239 event: Registered Node functional-326239 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[  +1.299675] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 16 c7 72 c4 93 08 06
	[  +0.000525] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[Dec 6 09:03] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 0a 3f cc d3 d9 16 08 06
	[  +0.000633] IPv4: martian source 10.244.0.32 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 9e 66 ad 97 5d 08 06
	[  +0.000768] IPv4: martian source 10.244.0.32 from 10.244.0.6, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff b2 e8 4d 60 a8 94 08 06
	[Dec 6 09:12] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 42 81 03 c8 c4 0c 08 06
	[Dec 6 09:13] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 ce 4a 36 be 39 08 06
	[Dec 6 09:14] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 6e 96 20 7e 61 08 06
	[Dec 6 09:25] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000002] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 8c d8 e5 b5 d0 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff fe d1 60 dc a1 8a 08 06
	[Dec 6 09:26] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 c0 1c 2b e1 b2 08 06
	[Dec 6 09:27] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 46 33 29 de 35 84 08 06
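
The recurring "martian source" lines above are the kernel flagging packets whose source address should not appear on the receiving interface, here pod-CIDR traffic (10.244.0.x) seen on eth0 inside the kicbase container. They are informational and only emitted while martian logging is enabled, which can be verified with a sysctl query (a sketch, assuming shell access to the node through this profile):

    minikube ssh -p functional-326239 -- sysctl net.ipv4.conf.all.log_martians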
	
	
	==> etcd [252ea51f2929] <==
	{"level":"warn","ts":"2025-12-06T09:27:00.466081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.472849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.487722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.494784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.501928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.508583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.515262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.522039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.528572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.535363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.547138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.553749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.562415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.569045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.575692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.590169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.596750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.604034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.611260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.618192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.658167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:27:00.708674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35742","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T09:37:00.196870Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1414}
	{"level":"info","ts":"2025-12-06T09:37:00.217718Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1414,"took":"20.421611ms","hash":615152118,"current-db-size-bytes":4030464,"current-db-size":"4.0 MB","current-db-size-in-use-bytes":2154496,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-12-06T09:37:00.217773Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":615152118,"revision":1414,"compact-revision":-1}
	
	
	==> etcd [d13bd55cfe89] <==
	{"level":"warn","ts":"2025-12-06T09:26:13.711985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.719248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.728349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.734991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.741886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.748836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.755421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.761648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.768049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.775806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.787179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.794102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.801937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.809234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.817575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.824368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.832430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.840229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.846990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.854824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.862270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.879470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.892726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.899797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T09:26:13.947416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53200","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:37:55 up  2:20,  0 user,  load average: 0.74, 0.35, 0.60
	Linux functional-326239 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [9650b470b535] <==
	I1206 09:27:01.165264       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1206 09:27:01.167086       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 09:27:01.167632       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	E1206 09:27:01.168251       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 09:27:01.171741       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:01.171760       1 policy_source.go:248] refreshing policies
	I1206 09:27:01.185414       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 09:27:01.792711       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 09:27:02.069434       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1206 09:27:02.847290       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1206 09:27:02.882960       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1206 09:27:02.913018       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1206 09:27:02.921097       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1206 09:27:04.549778       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 09:27:04.648419       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 09:27:20.089722       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.126.45"}
	I1206 09:27:25.680231       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.95.210"}
	I1206 09:27:26.265885       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 09:27:26.342843       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.7.69"}
	I1206 09:27:29.303777       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.78.28"}
	I1206 09:27:50.824501       1 controller.go:667] quota admission added evaluator for: namespaces
	I1206 09:27:50.924501       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.177.225"}
	I1206 09:27:50.933691       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.141.242"}
	I1206 09:27:53.497457       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.97.111.90"}
	I1206 09:37:01.092237       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
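
In the apiserver log above, each "quota admission added evaluator" line marks the quota admission plugin lazily registering an evaluator the first time a resource kind is used, and each "allocated clusterIPs" line records a Service IP handed out from the 10.96.0.0/12 Service CIDR noted in the last line. Any one allocation can be cross-checked, assuming kubectl access:

    kubectl --context functional-326239 get svc nginx-svc -o jsonpath='{.spec.clusterIP}'
    # expected to print 10.96.95.210, matching the log line above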
	
	
	==> kube-controller-manager [32435a975a61] <==
	I1206 09:26:17.543338       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542791       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.543445       1 range_allocator.go:177] "Sending events to api server"
	I1206 09:26:17.543505       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1206 09:26:17.543511       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:17.543517       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.543584       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542776       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.543602       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544476       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544596       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544653       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544828       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542651       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.544968       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.545005       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.545183       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.545874       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.542799       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.551452       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.555386       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:17.643251       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:17.643285       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:26:17.643293       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:26:17.655982       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [89bcf71329c9] <==
	I1206 09:27:04.304600       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.304860       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.304948       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.304976       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305159       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305209       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1206 09:27:04.305279       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305286       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-326239"
	I1206 09:27:04.305372       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1206 09:27:04.305634       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305646       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305679       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.305765       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.306019       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.306689       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.306950       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:27:04.403997       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:04.404025       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1206 09:27:04.404030       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1206 09:27:04.407939       1 shared_informer.go:377] "Caches are synced"
	E1206 09:27:50.867219       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.873234       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.879428       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.880773       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1206 09:27:50.883077       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
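
The five 'serviceaccount "kubernetes-dashboard" not found' errors at the end are a creation-order race: the apiserver log above shows the kubernetes-dashboard namespace and its services being created within the same second, so the ReplicaSet controller's first sync attempts ran before the service account existed and were retried. Had the dashboard pods never appeared, the first check (assuming kubectl access) would be:

    kubectl --context functional-326239 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard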
	
	
	==> kube-proxy [94649648cd63] <==
	I1206 09:26:12.602160       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:26:12.674751       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:14.377086       1 shared_informer.go:377] "Caches are synced"
	I1206 09:26:14.377355       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:26:14.377596       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:26:14.416940       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:26:14.417016       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:26:14.423977       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:26:14.425728       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:26:14.425755       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:26:14.427928       1 config.go:200] "Starting service config controller"
	I1206 09:26:14.427960       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:26:14.428012       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:26:14.428023       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:26:14.428030       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:26:14.428043       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:26:14.428068       1 config.go:309] "Starting node config controller"
	I1206 09:26:14.428083       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:26:14.428090       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:26:14.529013       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:26:14.529035       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:26:14.529051       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [de7b60d3f85f] <==
	I1206 09:27:02.323363       1 server_linux.go:53] "Using iptables proxy"
	I1206 09:27:02.393893       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:27:02.494373       1 shared_informer.go:377] "Caches are synced"
	I1206 09:27:02.494413       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 09:27:02.494497       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 09:27:02.516848       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 09:27:02.516936       1 server_linux.go:136] "Using iptables Proxier"
	I1206 09:27:02.522473       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 09:27:02.522825       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1206 09:27:02.522848       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:27:02.524250       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 09:27:02.524275       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 09:27:02.524276       1 config.go:106] "Starting endpoint slice config controller"
	I1206 09:27:02.524301       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 09:27:02.524280       1 config.go:200] "Starting service config controller"
	I1206 09:27:02.524351       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 09:27:02.524372       1 config.go:309] "Starting node config controller"
	I1206 09:27:02.524378       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 09:27:02.624406       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 09:27:02.624426       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 09:27:02.624452       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 09:27:02.624569       1 shared_informer.go:356] "Caches are synced" controller="service config"
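
Both kube-proxy instances emit the same "nodePortAddresses is unset" warning, meaning NodePort services accept connections on every local IP. The remedy the log itself suggests is the --nodeport-addresses primary flag; in config-file form that would look like the sketch below (field spelling assumes the v1alpha1 KubeProxyConfiguration schema):

    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    nodePortAddresses: ["primary"]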
	
	
	==> kube-scheduler [2876efeddbfb] <==
	I1206 09:26:59.727097       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:27:01.087550       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:27:01.087780       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:27:01.087813       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:27:01.087823       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:27:01.107671       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:27:01.107709       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:27:01.110792       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:27:01.110819       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:27:01.110831       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:27:01.111018       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:27:01.211217       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [8f00d318162a] <==
	I1206 09:26:13.180671       1 serving.go:386] Generated self-signed cert in-memory
	W1206 09:26:14.316173       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1206 09:26:14.316208       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1206 09:26:14.316219       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1206 09:26:14.316229       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1206 09:26:14.355000       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1206 09:26:14.355040       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 09:26:14.360961       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 09:26:14.361136       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 09:26:14.361154       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 09:26:14.361969       1 shared_informer.go:370] "Waiting for caches to sync"
	I1206 09:26:14.462893       1 shared_informer.go:377] "Caches are synced"
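
Both scheduler replicas fall back to treating requests as anonymous because they cannot read the extension-apiserver-authentication ConfigMap; the warning is non-fatal here ("Continuing without authentication configuration"). The log prints the fix template itself; since the scheduler authenticates as the user system:kube-scheduler rather than a service account, a concrete form would be (the binding name below is a placeholder):

    kubectl create rolebinding scheduler-auth-reader -n kube-system \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler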
	
	
	==> kubelet <==
	Dec 06 09:37:15 functional-326239 kubelet[8745]: E1206 09:37:15.695771    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s" podUID="ac224612-ccb6-4df5-8fdb-c2360339af04"
	Dec 06 09:37:19 functional-326239 kubelet[8745]: E1206 09:37:19.694395    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-zw8gz" podUID="fd8e4b13-e45a-40be-9fa0-1e7579b8d00f"
	Dec 06 09:37:19 functional-326239 kubelet[8745]: E1206 09:37:19.694626    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8de345de-964f-44a8-9994-19eb0772df93"
	Dec 06 09:37:25 functional-326239 kubelet[8745]: E1206 09:37:25.694630    8745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-functional-326239" containerName="kube-controller-manager"
	Dec 06 09:37:26 functional-326239 kubelet[8745]: E1206 09:37:26.694698    8745 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" containerName="dashboard-metrics-scraper"
	Dec 06 09:37:26 functional-326239 kubelet[8745]: E1206 09:37:26.697228    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" podUID="27202e3f-a7f1-4a1d-8885-3705d48bb1b7"
	Dec 06 09:37:26 functional-326239 kubelet[8745]: E1206 09:37:26.697564    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-87mns" podUID="228dd7da-28bd-4f2a-a53d-ca0ef6a0d7c8"
	Dec 06 09:37:28 functional-326239 kubelet[8745]: E1206 09:37:28.694844    8745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-functional-326239" containerName="kube-apiserver"
	Dec 06 09:37:29 functional-326239 kubelet[8745]: E1206 09:37:29.693971    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-x4599" podUID="47d3fefa-9586-4712-914d-c9afc666299e"
	Dec 06 09:37:30 functional-326239 kubelet[8745]: E1206 09:37:30.693960    8745 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s" containerName="kubernetes-dashboard"
	Dec 06 09:37:30 functional-326239 kubelet[8745]: E1206 09:37:30.696517    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s" podUID="ac224612-ccb6-4df5-8fdb-c2360339af04"
	Dec 06 09:37:31 functional-326239 kubelet[8745]: E1206 09:37:31.694485    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8de345de-964f-44a8-9994-19eb0772df93"
	Dec 06 09:37:34 functional-326239 kubelet[8745]: E1206 09:37:34.695001    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-zw8gz" podUID="fd8e4b13-e45a-40be-9fa0-1e7579b8d00f"
	Dec 06 09:37:41 functional-326239 kubelet[8745]: E1206 09:37:41.694117    8745 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" containerName="dashboard-metrics-scraper"
	Dec 06 09:37:41 functional-326239 kubelet[8745]: E1206 09:37:41.698136    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-87mns" podUID="228dd7da-28bd-4f2a-a53d-ca0ef6a0d7c8"
	Dec 06 09:37:41 functional-326239 kubelet[8745]: E1206 09:37:41.698605    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" podUID="27202e3f-a7f1-4a1d-8885-3705d48bb1b7"
	Dec 06 09:37:43 functional-326239 kubelet[8745]: E1206 09:37:43.694466    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-5758569b79-x4599" podUID="47d3fefa-9586-4712-914d-c9afc666299e"
	Dec 06 09:37:45 functional-326239 kubelet[8745]: E1206 09:37:45.694160    8745 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s" containerName="kubernetes-dashboard"
	Dec 06 09:37:45 functional-326239 kubelet[8745]: E1206 09:37:45.696617    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-zlv6s" podUID="ac224612-ccb6-4df5-8fdb-c2360339af04"
	Dec 06 09:37:46 functional-326239 kubelet[8745]: E1206 09:37:46.694879    8745 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-326239" containerName="kube-scheduler"
	Dec 06 09:37:46 functional-326239 kubelet[8745]: E1206 09:37:46.695133    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8de345de-964f-44a8-9994-19eb0772df93"
	Dec 06 09:37:49 functional-326239 kubelet[8745]: E1206 09:37:49.694421    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-9f67c86d4-zw8gz" podUID="fd8e4b13-e45a-40be-9fa0-1e7579b8d00f"
	Dec 06 09:37:54 functional-326239 kubelet[8745]: E1206 09:37:54.694355    8745 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" containerName="dashboard-metrics-scraper"
	Dec 06 09:37:54 functional-326239 kubelet[8745]: E1206 09:37:54.696680    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-844cf969f6-87mns" podUID="228dd7da-28bd-4f2a-a53d-ca0ef6a0d7c8"
	Dec 06 09:37:54 functional-326239 kubelet[8745]: E1206 09:37:54.697021    8745 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-sbmk5" podUID="27202e3f-a7f1-4a1d-8885-3705d48bb1b7"
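
Every kubelet error in this window has the same root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests) while pulling the dashboard, echo-server, nginx and mysql images, all of which resolve to docker.io; the gcr.io busybox image in the pod descriptions further down pulled without issue. One conventional mitigation, assuming Docker Hub credentials are available (the secret name and credential placeholders below are illustrative), is to attach an image pull secret to the pods' service account:

    kubectl create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<access-token>
    kubectl patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'

CI fleets more often solve this with an authenticated pull-through mirror so individual jobs need no secrets.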
	
	
	==> storage-provisioner [0d96bdf31a92] <==
	I1206 09:26:24.816621       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1206 09:26:24.816671       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1206 09:26:24.818753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:28.274068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:32.535027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:36.133306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:39.189800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:42.212252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:42.218505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:26:42.218741       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1206 09:26:42.218864       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a7a2486b-50c5-43aa-87c6-fe9171bc66e3", APIVersion:"v1", ResourceVersion:"553", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-326239_06d988fb-59e4-447c-94c6-ab6bc5c0a7f4 became leader
	I1206 09:26:42.218959       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-326239_06d988fb-59e4-447c-94c6-ab6bc5c0a7f4!
	W1206 09:26:42.220599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:42.223472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1206 09:26:42.320091       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-326239_06d988fb-59e4-447c-94c6-ab6bc5c0a7f4!
	W1206 09:26:44.227019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:44.230865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:46.234176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:46.238415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:48.242019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:48.246112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:50.249752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:50.254841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:52.258225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:26:52.262127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
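
The storage provisioner still takes its leader-election lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), so every renewal trips the "v1 Endpoints is deprecated in v1.33+" client-side warning; the election itself succeeds, per the "successfully acquired lease" line. The lock object can be inspected directly, assuming kubectl access:

    kubectl --context functional-326239 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml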
	
	
	==> storage-provisioner [87dba194ce02] <==
	W1206 09:37:29.992715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:31.995194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:31.999226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:34.002374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:34.006646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:36.010562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:36.015846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:38.018614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:38.023254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:40.026713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:40.031104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:42.034132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:42.039665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:44.042961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:44.047807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:46.051677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:46.056706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:48.060382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:48.065316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:50.068885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:50.074493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:52.077104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:52.082062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:54.085080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 09:37:54.091275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-326239 -n functional-326239
helpers_test.go:269: (dbg) Run:  kubectl --context functional-326239 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz mysql-844cf969f6-87mns sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-326239 describe pod busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz mysql-844cf969f6-87mns sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-326239 describe pod busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz mysql-844cf969f6-87mns sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s: exit status 1 (88.060343ms)
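
The non-zero exit here is expected for this post-mortem invocation: the two dashboard pods named on the command line live in the kubernetes-dashboard namespace, so a describe issued against the default namespace reports them as not found (exit status 1) while still printing the default-namespace pods to stdout below. Describing them directly would look like:

    kubectl --context functional-326239 -n kubernetes-dashboard describe pod kubernetes-dashboard-b84665fb8-zlv6s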

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:37 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://a0fcd81222f118539b5967330da5243f390d47260cea6ccca50207c84ffeab6c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 06 Dec 2025 09:27:39 +0000
	      Finished:     Sat, 06 Dec 2025 09:27:39 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5gpqn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-5gpqn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-326239
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.363s (1.363s including waiting). Image size: 4403845 bytes.
	  Normal  Created    10m   kubelet            Container created
	  Normal  Started    10m   kubelet            Container started
	
	
	Name:             hello-node-5758569b79-x4599
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:29 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ldsr2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ldsr2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-5758569b79-x4599 to functional-326239
	  Warning  Failed     9m9s (x3 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m42s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m42s (x2 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m42s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    12s (x44 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     12s (x44 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-9f67c86d4-zw8gz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:26 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=9f67c86d4
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-9f67c86d4
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2872p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2872p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-9f67c86d4-zw8gz to functional-326239
	  Normal   Pulling    7m31s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m31s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m31s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    21s (x41 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     21s (x41 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             mysql-844cf969f6-87mns
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:53 +0000
	Labels:           app=mysql
	                  pod-template-hash=844cf969f6
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.16
	IPs:
	  IP:           10.244.0.16
	Controlled By:  ReplicaSet/mysql-844cf969f6
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xxj6s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xxj6s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/mysql-844cf969f6-87mns to functional-326239
	  Normal   Pulling    7m9s (x5 over 10m)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m9s (x5 over 10m)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m9s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x41 over 10m)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     1s (x41 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-326239/192.168.49.2
	Start Time:       Sat, 06 Dec 2025 09:27:31 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w6q4p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-w6q4p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/sp-pod to functional-326239
	  Normal   Pulling    7m28s (x5 over 10m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m28s (x5 over 10m)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m28s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    9s (x44 over 10m)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     9s (x44 over 10m)    kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-5565989548-sbmk5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-zlv6s" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-326239 describe pod busybox-mount hello-node-5758569b79-x4599 hello-node-connect-9f67c86d4-zw8gz mysql-844cf969f6-87mns sp-pod dashboard-metrics-scraper-5565989548-sbmk5 kubernetes-dashboard-b84665fb8-zlv6s: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (602.40s)
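Note: every ImagePullBackOff in the describe output above shares one root cause, visible in each event stream: Docker Hub's toomanyrequests throttle on unauthenticated pulls. A minimal mitigation sketch, assuming a Docker Hub account; the secret name regcred and the <user>/<token> placeholders are illustrative, not taken from this report:

    # Attach registry credentials to the default service account so kubelet pulls are authenticated.
    kubectl create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> \
      --docker-password=<token>
    kubectl patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Existing pods must be recreated to pick the secret up. Side-loading the image avoids the registry entirely, e.g. minikube -p functional-326239 image load kicbase/echo-server.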

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-326239 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-326239 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-x4599" [47d3fefa-9586-4712-914d-c9afc666299e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-326239 -n functional-326239
functional_test.go:1460: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-12-06 09:37:29.624446014 +0000 UTC m=+2235.331085664
functional_test.go:1460: (dbg) Run:  kubectl --context functional-326239 describe po hello-node-5758569b79-x4599 -n default
functional_test.go:1460: (dbg) kubectl --context functional-326239 describe po hello-node-5758569b79-x4599 -n default:
Name:             hello-node-5758569b79-x4599
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-326239/192.168.49.2
Start Time:       Sat, 06 Dec 2025 09:27:29 +0000
Labels:           app=hello-node
                  pod-template-hash=5758569b79
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-5758569b79
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ldsr2 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-ldsr2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-5758569b79-x4599 to functional-326239
  Warning  Failed     8m43s (x3 over 9m48s)   kubelet            Failed to pull image "kicbase/echo-server": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    7m16s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m16s (x2 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     7m16s (x5 over 10m)     kubelet            Error: ErrImagePull
  Normal   BackOff    4m56s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m56s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-326239 logs hello-node-5758569b79-x4599 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-326239 logs hello-node-5758569b79-x4599 -n default: exit status 1 (61.567271ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-5758569b79-x4599" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-326239 logs hello-node-5758569b79-x4599 -n default: exit status 1
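Note: the BadRequest from kubectl logs here is expected: a container stuck in ImagePullBackOff has never started, so no log stream exists. The waiting reason has to be read from pod status instead; a sketch using the pod name from the output above:

    kubectl --context functional-326239 get pod hello-node-5758569b79-x4599 \
      -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'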
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (600.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326239 service --namespace=default --https --url hello-node: exit status 115 (544.350484ms)

-- stdout --
	https://192.168.49.2:31389
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-326239 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.54s)
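Note: this and the two ServiceCmd failures that follow are secondary effects of the DeployApp failure above. The hello-node service itself exists (a NodePort URL is printed on stdout), but minikube service exits with SVC_UNREACHABLE because no running pod backs it. A quick way to confirm the empty backing set, sketched with the profile name from this run:

    kubectl --context functional-326239 get endpoints hello-node
    minikube -p functional-326239 service list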

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326239 service hello-node --url --format={{.IP}}: exit status 115 (554.10196ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-326239 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.55s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326239 service hello-node --url: exit status 115 (543.706285ms)

-- stdout --
	http://192.168.49.2:31389
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-326239 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31389
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.54s)

Test pass (390/434)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 3.91
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 2.41
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.22
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.35.0-beta.0/json-events 2.76
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
29 TestDownloadOnlyKic 0.41
30 TestBinaryMirror 0.81
31 TestOffline 81.6
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 93.61
38 TestAddons/serial/Volcano 39.24
40 TestAddons/serial/GCPAuth/Namespaces 0.11
41 TestAddons/serial/GCPAuth/FakeCredentials 10.51
44 TestAddons/parallel/Registry 15.42
45 TestAddons/parallel/RegistryCreds 0.66
47 TestAddons/parallel/InspektorGadget 10.66
48 TestAddons/parallel/MetricsServer 5.62
50 TestAddons/parallel/CSI 40.1
51 TestAddons/parallel/Headlamp 16.44
52 TestAddons/parallel/CloudSpanner 5.56
54 TestAddons/parallel/NvidiaDevicePlugin 6.45
55 TestAddons/parallel/Yakd 11.67
56 TestAddons/parallel/AmdGpuDevicePlugin 5.45
57 TestAddons/StoppedEnableDisable 11.28
58 TestCertOptions 27.89
59 TestCertExpiration 236.43
60 TestDockerFlags 38.9
61 TestForceSystemdFlag 29.86
62 TestForceSystemdEnv 28.52
67 TestErrorSpam/setup 24.62
68 TestErrorSpam/start 0.69
69 TestErrorSpam/status 0.97
70 TestErrorSpam/pause 1.28
71 TestErrorSpam/unpause 1.35
72 TestErrorSpam/stop 11.07
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 62.45
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 38.23
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.06
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.12
84 TestFunctional/serial/CacheCmd/cache/add_local 0.77
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.39
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 35.54
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.1
95 TestFunctional/serial/LogsFileCmd 1.09
96 TestFunctional/serial/InvalidService 4.32
98 TestFunctional/parallel/ConfigCmd 0.5
100 TestFunctional/parallel/DryRun 0.39
101 TestFunctional/parallel/InternationalLanguage 0.17
102 TestFunctional/parallel/StatusCmd 0.97
106 TestFunctional/parallel/ServiceCmdConnect 8.68
107 TestFunctional/parallel/AddonsCmd 0.16
110 TestFunctional/parallel/SSHCmd 0.64
111 TestFunctional/parallel/CpCmd 1.93
113 TestFunctional/parallel/FileSync 0.27
114 TestFunctional/parallel/CertSync 1.65
118 TestFunctional/parallel/NodeLabels 0.08
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
122 TestFunctional/parallel/License 0.23
123 TestFunctional/parallel/ServiceCmd/DeployApp 8.19
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/ServiceCmd/List 0.51
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
132 TestFunctional/parallel/ServiceCmd/Format 0.37
133 TestFunctional/parallel/ServiceCmd/URL 0.37
134 TestFunctional/parallel/Version/short 0.06
135 TestFunctional/parallel/Version/components 0.49
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
137 TestFunctional/parallel/ProfileCmd/profile_list 0.42
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
139 TestFunctional/parallel/MountCmd/any-port 7.85
140 TestFunctional/parallel/MountCmd/specific-port 1.62
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.84
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
146 TestFunctional/parallel/DockerEnv/bash 0.98
147 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
148 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
149 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
150 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
151 TestFunctional/parallel/ImageCommands/ImageBuild 2.62
152 TestFunctional/parallel/ImageCommands/Setup 0.41
153 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.89
154 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.79
155 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 0.92
156 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
157 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
158 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.55
159 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
163 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 57.87
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 39.95
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 2.07
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 0.67
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.29
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.31
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 40.63
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.06
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.01
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.02
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.17
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.51
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.39
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.17
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.95
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.19
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.65
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.92
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.28
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.62
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.06
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.33
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.29
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.22
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.22
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.22
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.22
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 2.81
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.19
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.15
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.49
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.92
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 9.24
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.09
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.34
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.44
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.59
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.36
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.41
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.39
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.39
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 6.57
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.71
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.55
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash 0.99
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.15
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 1.71
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 1.72
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
261 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
262 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
266 TestMultiControlPlane/serial/StartCluster 122.05
267 TestMultiControlPlane/serial/DeployApp 5.18
268 TestMultiControlPlane/serial/PingHostFromPods 1.27
269 TestMultiControlPlane/serial/AddWorkerNode 33.6
270 TestMultiControlPlane/serial/NodeLabels 0.07
271 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
272 TestMultiControlPlane/serial/CopyFile 17.6
273 TestMultiControlPlane/serial/StopSecondaryNode 11.6
274 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.72
275 TestMultiControlPlane/serial/RestartSecondaryNode 37.41
276 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.97
277 TestMultiControlPlane/serial/RestartClusterKeepsNodes 155.92
278 TestMultiControlPlane/serial/DeleteSecondaryNode 9.49
279 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
280 TestMultiControlPlane/serial/StopCluster 32.82
281 TestMultiControlPlane/serial/RestartCluster 81.05
282 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
283 TestMultiControlPlane/serial/AddSecondaryNode 41.14
284 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
287 TestImageBuild/serial/Setup 20.33
288 TestImageBuild/serial/NormalBuild 1.06
289 TestImageBuild/serial/BuildWithBuildArg 0.68
290 TestImageBuild/serial/BuildWithDockerIgnore 0.47
291 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.48
296 TestJSONOutput/start/Command 62.29
297 TestJSONOutput/start/Audit 0
299 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/pause/Command 0.5
303 TestJSONOutput/pause/Audit 0
305 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
308 TestJSONOutput/unpause/Command 0.48
309 TestJSONOutput/unpause/Audit 0
311 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
312 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
314 TestJSONOutput/stop/Command 5.87
315 TestJSONOutput/stop/Audit 0
317 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
318 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
319 TestErrorJSONOutput 0.24
321 TestKicCustomNetwork/create_custom_network 26.98
322 TestKicCustomNetwork/use_default_bridge_network 21.78
323 TestKicExistingNetwork 25.9
324 TestKicCustomSubnet 26.77
325 TestKicStaticIP 25.42
326 TestMainNoArgs 0.06
327 TestMinikubeProfile 55.07
330 TestMountStart/serial/StartWithMountFirst 9.26
331 TestMountStart/serial/VerifyMountFirst 0.27
332 TestMountStart/serial/StartWithMountSecond 9.3
333 TestMountStart/serial/VerifyMountSecond 0.26
334 TestMountStart/serial/DeleteFirst 1.55
335 TestMountStart/serial/VerifyMountPostDelete 0.27
336 TestMountStart/serial/Stop 1.25
337 TestMountStart/serial/RestartStopped 8.11
338 TestMountStart/serial/VerifyMountPostStop 0.27
341 TestMultiNode/serial/FreshStart2Nodes 75.49
342 TestMultiNode/serial/DeployApp2Nodes 4.56
343 TestMultiNode/serial/PingHostFrom2Pods 0.86
344 TestMultiNode/serial/AddNode 30.47
345 TestMultiNode/serial/MultiNodeLabels 0.06
346 TestMultiNode/serial/ProfileList 0.65
347 TestMultiNode/serial/CopyFile 9.68
348 TestMultiNode/serial/StopNode 2.24
349 TestMultiNode/serial/StartAfterStop 9.52
350 TestMultiNode/serial/RestartKeepsNodes 72.43
351 TestMultiNode/serial/DeleteNode 5.28
352 TestMultiNode/serial/StopMultiNode 21.84
353 TestMultiNode/serial/RestartMultiNode 51.36
354 TestMultiNode/serial/ValidateNameConflict 24.92
359 TestPreload 128.65
361 TestScheduledStopUnix 96.86
362 TestSkaffold 75.58
364 TestInsufficientStorage 9.38
365 TestRunningBinaryUpgrade 333.03
367 TestKubernetesUpgrade 338.28
368 TestMissingContainerUpgrade 70.56
380 TestStoppedBinaryUpgrade/Setup 0.72
381 TestStoppedBinaryUpgrade/Upgrade 320.91
390 TestPause/serial/Start 36.76
391 TestPause/serial/SecondStartNoReconfiguration 40.11
392 TestPause/serial/Pause 0.51
393 TestPause/serial/VerifyStatus 0.33
394 TestPause/serial/Unpause 0.48
395 TestPause/serial/PauseAgain 0.56
396 TestPause/serial/DeletePaused 2.23
397 TestPause/serial/VerifyDeletedResources 30.04
399 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
400 TestNoKubernetes/serial/StartWithK8s 21.43
401 TestNoKubernetes/serial/StartWithStopK8s 15.85
402 TestNoKubernetes/serial/Start 8.57
403 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
404 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
405 TestNoKubernetes/serial/ProfileList 16.06
406 TestNoKubernetes/serial/Stop 1.28
407 TestNoKubernetes/serial/StartNoArgs 7.25
408 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
409 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
410 TestNetworkPlugins/group/auto/Start 73.49
411 TestNetworkPlugins/group/kindnet/Start 47.43
412 TestNetworkPlugins/group/calico/Start 57.6
413 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
414 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
415 TestNetworkPlugins/group/kindnet/NetCatPod 10.53
416 TestNetworkPlugins/group/kindnet/DNS 0.14
417 TestNetworkPlugins/group/kindnet/Localhost 0.12
418 TestNetworkPlugins/group/kindnet/HairPin 0.12
419 TestNetworkPlugins/group/auto/KubeletFlags 0.3
420 TestNetworkPlugins/group/auto/NetCatPod 10.19
421 TestNetworkPlugins/group/auto/DNS 0.16
422 TestNetworkPlugins/group/auto/Localhost 0.17
423 TestNetworkPlugins/group/auto/HairPin 0.13
424 TestNetworkPlugins/group/calico/ControllerPod 6.01
425 TestNetworkPlugins/group/custom-flannel/Start 38.93
426 TestNetworkPlugins/group/calico/KubeletFlags 0.32
427 TestNetworkPlugins/group/calico/NetCatPod 11.22
428 TestNetworkPlugins/group/calico/DNS 0.14
429 TestNetworkPlugins/group/calico/Localhost 0.13
430 TestNetworkPlugins/group/calico/HairPin 0.12
431 TestNetworkPlugins/group/false/Start 67.68
432 TestNetworkPlugins/group/enable-default-cni/Start 40.61
433 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
434 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.24
435 TestNetworkPlugins/group/custom-flannel/DNS 0.19
436 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
437 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
438 TestNetworkPlugins/group/flannel/Start 34.65
439 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
440 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.2
441 TestNetworkPlugins/group/false/KubeletFlags 0.31
442 TestNetworkPlugins/group/false/NetCatPod 9.19
443 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
444 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
445 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
446 TestNetworkPlugins/group/false/DNS 0.15
447 TestNetworkPlugins/group/false/Localhost 0.12
448 TestNetworkPlugins/group/false/HairPin 0.13
449 TestNetworkPlugins/group/flannel/ControllerPod 6.01
450 TestNetworkPlugins/group/bridge/Start 73.56
451 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
452 TestNetworkPlugins/group/flannel/NetCatPod 10.3
453 TestNetworkPlugins/group/kubenet/Start 69.43
455 TestStartStop/group/old-k8s-version/serial/FirstStart 43.22
456 TestNetworkPlugins/group/flannel/DNS 0.14
457 TestNetworkPlugins/group/flannel/Localhost 0.12
458 TestNetworkPlugins/group/flannel/HairPin 0.13
460 TestStartStop/group/no-preload/serial/FirstStart 66.06
461 TestStartStop/group/old-k8s-version/serial/DeployApp 10.32
462 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.99
463 TestStartStop/group/old-k8s-version/serial/Stop 11.02
464 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
465 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
466 TestStartStop/group/old-k8s-version/serial/SecondStart 46.23
467 TestNetworkPlugins/group/bridge/NetCatPod 8.21
468 TestNetworkPlugins/group/kubenet/KubeletFlags 0.29
469 TestNetworkPlugins/group/kubenet/NetCatPod 9.18
470 TestNetworkPlugins/group/bridge/DNS 0.15
471 TestNetworkPlugins/group/bridge/Localhost 0.12
472 TestNetworkPlugins/group/bridge/HairPin 0.12
473 TestNetworkPlugins/group/kubenet/DNS 0.18
474 TestNetworkPlugins/group/kubenet/Localhost 0.16
475 TestNetworkPlugins/group/kubenet/HairPin 0.15
477 TestStartStop/group/embed-certs/serial/FirstStart 71.82
478 TestStartStop/group/no-preload/serial/DeployApp 11.3
480 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.14
481 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.91
482 TestStartStop/group/no-preload/serial/Stop 11.09
483 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
484 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
485 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
486 TestStartStop/group/no-preload/serial/SecondStart 52.48
487 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
488 TestStartStop/group/old-k8s-version/serial/Pause 2.97
490 TestStartStop/group/newest-cni/serial/FirstStart 26.38
491 TestStartStop/group/newest-cni/serial/DeployApp 0
492 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.74
493 TestStartStop/group/newest-cni/serial/Stop 11
494 TestStartStop/group/embed-certs/serial/DeployApp 8.25
495 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
496 TestStartStop/group/newest-cni/serial/SecondStart 12.34
497 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.32
498 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
499 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.87
500 TestStartStop/group/embed-certs/serial/Stop 11.05
501 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
502 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.11
503 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
504 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
505 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
506 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
507 TestStartStop/group/newest-cni/serial/Pause 2.42
508 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
509 TestStartStop/group/no-preload/serial/Pause 2.48
510 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.29
511 TestStartStop/group/embed-certs/serial/SecondStart 46.95
512 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
513 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.66
514 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
515 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
516 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
517 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
518 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
519 TestStartStop/group/embed-certs/serial/Pause 2.54
520 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
521 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.64

TestDownloadOnly/v1.28.0/json-events (3.91s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-603297 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-603297 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.912439566s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (3.91s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1206 09:00:18.244620  558759 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1206 09:00:18.244723  558759 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-555179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
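Note: as the preload.go lines above show, this check is a plain cache lookup: minikube resolves the expected tarball name for the Kubernetes version and container runtime and looks for it on disk, so the test completes in ~0s once the preceding download-only run has populated the cache. The equivalent manual check, using the path from the log above:

    ls -lh /home/jenkins/minikube-integration/22047-555179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4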

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-603297
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-603297: exit status 85 (77.205429ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-603297 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-603297 │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:00:14
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:00:14.385653  558771 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:00:14.385899  558771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:00:14.385922  558771 out.go:374] Setting ErrFile to fd 2...
	I1206 09:00:14.385930  558771 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:00:14.386144  558771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	W1206 09:00:14.386279  558771 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22047-555179/.minikube/config/config.json: open /home/jenkins/minikube-integration/22047-555179/.minikube/config/config.json: no such file or directory
	I1206 09:00:14.386726  558771 out.go:368] Setting JSON to true
	I1206 09:00:14.387618  558771 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6161,"bootTime":1765005453,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:00:14.387674  558771 start.go:143] virtualization: kvm guest
	I1206 09:00:14.391004  558771 out.go:99] [download-only-603297] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1206 09:00:14.391153  558771 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22047-555179/.minikube/cache/preloaded-tarball: no such file or directory
	I1206 09:00:14.391218  558771 notify.go:221] Checking for updates...
	I1206 09:00:14.392317  558771 out.go:171] MINIKUBE_LOCATION=22047
	I1206 09:00:14.393551  558771 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:00:14.394719  558771 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:00:14.395947  558771 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:00:14.397099  558771 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 09:00:14.399033  558771 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 09:00:14.399306  558771 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:00:14.423347  558771 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:00:14.423428  558771 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:00:14.479456  558771 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-06 09:00:14.469366678 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:00:14.479559  558771 docker.go:319] overlay module found
	I1206 09:00:14.481299  558771 out.go:99] Using the docker driver based on user configuration
	I1206 09:00:14.481329  558771 start.go:309] selected driver: docker
	I1206 09:00:14.481338  558771 start.go:927] validating driver "docker" against <nil>
	I1206 09:00:14.481431  558771 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:00:14.535662  558771 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-06 09:00:14.526386225 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:00:14.535816  558771 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:00:14.536423  558771 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1206 09:00:14.536619  558771 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 09:00:14.538322  558771 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-603297 host does not exist
	  To start a cluster, run: "minikube start -p download-only-603297"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-603297
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.2/json-events (2.41s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-955357 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-955357 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (2.410096296s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (2.41s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1206 09:00:21.112499  558759 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
I1206 09:00:21.112536  558759 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-555179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-955357
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-955357: exit status 85 (74.398119ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-603297 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-603297 │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:00 UTC │
	│ delete  │ -p download-only-603297                                                                                                                                                       │ download-only-603297 │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:00 UTC │
	│ start   │ -o=json --download-only -p download-only-955357 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-955357 │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:00:18
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:00:18.754743  559115 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:00:18.754852  559115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:00:18.754859  559115 out.go:374] Setting ErrFile to fd 2...
	I1206 09:00:18.754865  559115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:00:18.755071  559115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:00:18.755551  559115 out.go:368] Setting JSON to true
	I1206 09:00:18.756462  559115 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6166,"bootTime":1765005453,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:00:18.756521  559115 start.go:143] virtualization: kvm guest
	I1206 09:00:18.758382  559115 out.go:99] [download-only-955357] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:00:18.758543  559115 notify.go:221] Checking for updates...
	I1206 09:00:18.760262  559115 out.go:171] MINIKUBE_LOCATION=22047
	I1206 09:00:18.761749  559115 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:00:18.762856  559115 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:00:18.764075  559115 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:00:18.765118  559115 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 09:00:18.767298  559115 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 09:00:18.767615  559115 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:00:18.792287  559115 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:00:18.792373  559115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:00:18.847469  559115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-06 09:00:18.837635572 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:00:18.847573  559115 docker.go:319] overlay module found
	I1206 09:00:18.849236  559115 out.go:99] Using the docker driver based on user configuration
	I1206 09:00:18.849273  559115 start.go:309] selected driver: docker
	I1206 09:00:18.849281  559115 start.go:927] validating driver "docker" against <nil>
	I1206 09:00:18.849396  559115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:00:18.903301  559115 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-06 09:00:18.89390221 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:00:18.903518  559115 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:00:18.904087  559115 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1206 09:00:18.904289  559115 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 09:00:18.906406  559115 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-955357 host does not exist
	  To start a cluster, run: "minikube start -p download-only-955357"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.22s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-955357
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.35.0-beta.0/json-events (2.76s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-716523 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-716523 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (2.758258214s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (2.76s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1206 09:00:24.315351  558759 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
I1206 09:00:24.315386  558759 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-555179/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-716523
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-716523: exit status 85 (73.24993ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-603297 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker        │ download-only-603297 │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:00 UTC │
	│ delete  │ -p download-only-603297                                                                                                                                                              │ download-only-603297 │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:00 UTC │
	│ start   │ -o=json --download-only -p download-only-955357 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker  --container-runtime=docker        │ download-only-955357 │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:00 UTC │
	│ delete  │ -p download-only-955357                                                                                                                                                              │ download-only-955357 │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │ 06 Dec 25 09:00 UTC │
	│ start   │ -o=json --download-only -p download-only-716523 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-716523 │ jenkins │ v1.37.0 │ 06 Dec 25 09:00 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 09:00:21
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 09:00:21.610798  559476 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:00:21.611080  559476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:00:21.611089  559476 out.go:374] Setting ErrFile to fd 2...
	I1206 09:00:21.611094  559476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:00:21.611353  559476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:00:21.611873  559476 out.go:368] Setting JSON to true
	I1206 09:00:21.612812  559476 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6169,"bootTime":1765005453,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:00:21.612867  559476 start.go:143] virtualization: kvm guest
	I1206 09:00:21.614587  559476 out.go:99] [download-only-716523] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:00:21.614750  559476 notify.go:221] Checking for updates...
	I1206 09:00:21.616309  559476 out.go:171] MINIKUBE_LOCATION=22047
	I1206 09:00:21.617725  559476 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:00:21.619123  559476 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:00:21.620365  559476 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:00:21.621734  559476 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1206 09:00:21.623792  559476 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 09:00:21.624058  559476 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:00:21.646009  559476 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:00:21.646101  559476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:00:21.701334  559476 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-06 09:00:21.692002789 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:00:21.701431  559476 docker.go:319] overlay module found
	I1206 09:00:21.703086  559476 out.go:99] Using the docker driver based on user configuration
	I1206 09:00:21.703116  559476 start.go:309] selected driver: docker
	I1206 09:00:21.703122  559476 start.go:927] validating driver "docker" against <nil>
	I1206 09:00:21.703205  559476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:00:21.757928  559476 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-06 09:00:21.748780582 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:00:21.758109  559476 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 09:00:21.758596  559476 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1206 09:00:21.758726  559476 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 09:00:21.760665  559476 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-716523 host does not exist
	  To start a cluster, run: "minikube start -p download-only-716523"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-716523
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (0.41s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-129039 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-129039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-129039
--- PASS: TestDownloadOnlyKic (0.41s)
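
TestDownloadOnlyKic repeats the download-only flow with the kic (Docker container) driver and immediately removes the profile; the 0.41s runtime reflects caches already warmed by the earlier subtests. The create-then-clean pattern, sketched with the profile name from this run:

  out/minikube-linux-amd64 start --download-only -p download-docker-129039 --driver=docker --container-runtime=docker
  out/minikube-linux-amd64 delete -p download-docker-129039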

TestBinaryMirror (0.81s)

=== RUN   TestBinaryMirror
I1206 09:00:25.609507  558759 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-001335 --alsologtostderr --binary-mirror http://127.0.0.1:37221 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-001335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-001335
--- PASS: TestBinaryMirror (0.81s)
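
The --binary-mirror flag redirects the kubectl/kubelet/kubeadm downloads away from dl.k8s.io, and the test points minikube at a throwaway endpoint on 127.0.0.1:37221. One way to stand up such a mirror by hand, assuming a local directory that mirrors the release paths minikube requests (the directory name here is illustrative):

  python3 -m http.server 37221 --directory ./k8s-mirror &
  out/minikube-linux-amd64 start --download-only -p binary-mirror-001335 --binary-mirror http://127.0.0.1:37221 --driver=docker --container-runtime=docker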

TestOffline (81.6s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-412681 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-412681 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m19.236702736s)
helpers_test.go:175: Cleaning up "offline-docker-412681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-412681
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-412681: (2.362832793s)
--- PASS: TestOffline (81.60s)
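
TestOffline starts a full cluster that must come up without fetching anything at runtime, which is feasible here because the preload tarballs verified earlier already hold the required images. Reproducing the run by hand with the profile name from this log:

  out/minikube-linux-amd64 start -p offline-docker-412681 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker --container-runtime=docker
  out/minikube-linux-amd64 delete -p offline-docker-412681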

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-397143
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-397143: exit status 85 (63.579388ms)
-- stdout --
	* Profile "addons-397143" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-397143"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-397143
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-397143: exit status 85 (64.227352ms)
-- stdout --
	* Profile "addons-397143" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-397143"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (93.61s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-397143 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-397143 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m33.605876203s)
--- PASS: TestAddons/Setup (93.61s)
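
This single start enables fifteen addons at once, and every serial and parallel subtest that follows runs against the same addons-397143 cluster. The same shape with a trimmed addon list, purely as an illustration:

  out/minikube-linux-amd64 start -p addons-397143 --wait=true --memory=4096 --addons=registry --addons=metrics-server --addons=ingress --driver=docker --container-runtime=docker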

TestAddons/serial/Volcano (39.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 15.743295ms
addons_test.go:884: volcano-controller stabilized in 15.801411ms
addons_test.go:868: volcano-scheduler stabilized in 15.838287ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-btlqh" [0e69029a-050a-4289-9e28-aa0aab3f310f] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003500342s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-28sgx" [48168b82-2b01-4c45-9a0b-1785aa90a187] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004162181s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-44t5h" [a6f8fe50-8d42-4088-b705-582dfd649cbd] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003767889s
addons_test.go:903: (dbg) Run:  kubectl --context addons-397143 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-397143 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-397143 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [16a8a6f2-bef0-4195-a957-8ac03356f04d] Pending
helpers_test.go:352: "test-job-nginx-0" [16a8a6f2-bef0-4195-a957-8ac03356f04d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [16a8a6f2-bef0-4195-a957-8ac03356f04d] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.002801648s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-397143 addons disable volcano --alsologtostderr -v=1: (11.910371998s)
--- PASS: TestAddons/serial/Volcano (39.24s)
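
The Volcano subtest waits for the scheduler, admission, and controller pods, then submits a vcjob manifest from testdata and waits for its pod to run before disabling the addon. The submission and inspection steps, runnable against the same context:

  kubectl --context addons-397143 create -f testdata/vcjob.yaml
  kubectl --context addons-397143 get vcjob -n my-volcano
  kubectl --context addons-397143 get pods -n my-volcano -l volcano.sh/job-name=test-job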

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-397143 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-397143 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (10.51s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-397143 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-397143 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5dbde59c-c425-4629-899a-ea5adba69c7a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5dbde59c-c425-4629-899a-ea5adba69c7a] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004110928s
addons_test.go:694: (dbg) Run:  kubectl --context addons-397143 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-397143 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-397143 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.51s)
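
The gcp-auth webhook mutates new pods to carry fake credentials, so the assertions reduce to env-var probes inside the busybox pod. The same checks by hand:

  kubectl --context addons-397143 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
  kubectl --context addons-397143 exec busybox -- printenv GOOGLE_CLOUD_PROJECT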

TestAddons/parallel/Registry (15.42s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.90763ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-zpjdp" [548aae88-07b1-44c4-be9a-0e70e03f5eb2] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003775467s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-rxwrl" [29e4f265-043b-4c75-862c-a02beb7c6e1e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00410097s
addons_test.go:392: (dbg) Run:  kubectl --context addons-397143 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-397143 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-397143 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.65639349s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 ip
2025/12/06 09:03:13 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.42s)
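
The registry check has two halves: an in-cluster probe through the service DNS name and a host-side probe against the node IP on port 5000. Both, condensed (the curl line is an illustrative stand-in for the test's HTTP GET):

  kubectl --context addons-397143 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  curl -sS -o /dev/null -w '%{http_code}\n' http://192.168.49.2:5000/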

TestAddons/parallel/RegistryCreds (0.66s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.166342ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-397143
addons_test.go:332: (dbg) Run:  kubectl --context addons-397143 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.66s)

TestAddons/parallel/InspektorGadget (10.66s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-sf24w" [f70de5d2-14c9-4f5a-959c-4eba1338ef6e] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003577091s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-397143 addons disable inspektor-gadget --alsologtostderr -v=1: (5.656201038s)
--- PASS: TestAddons/parallel/InspektorGadget (10.66s)

TestAddons/parallel/MetricsServer (5.62s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.172926ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-kf4gp" [028fa0be-ce61-4a1a-88bc-1cb6d15e3e69] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003152783s
addons_test.go:463: (dbg) Run:  kubectl --context addons-397143 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.62s)
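
Once the metrics-server pod reports healthy, the functional check is simply that the metrics API answers; the same probe by hand:

  kubectl --context addons-397143 top pods -n kube-system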

TestAddons/parallel/CSI (40.1s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1206 09:03:04.278391  558759 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1206 09:03:04.281627  558759 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1206 09:03:04.281656  558759 kapi.go:107] duration metric: took 3.291903ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.325618ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-397143 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-397143 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [2bba6668-a6d0-4354-8c04-6b53d36fd97f] Pending
helpers_test.go:352: "task-pv-pod" [2bba6668-a6d0-4354-8c04-6b53d36fd97f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [2bba6668-a6d0-4354-8c04-6b53d36fd97f] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.00277628s
addons_test.go:572: (dbg) Run:  kubectl --context addons-397143 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-397143 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-397143 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-397143 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-397143 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-397143 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-397143 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-397143 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [ec6fe0ad-3eb8-4cb6-a8d3-de7e7d3038c2] Pending
helpers_test.go:352: "task-pv-pod-restore" [ec6fe0ad-3eb8-4cb6-a8d3-de7e7d3038c2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [ec6fe0ad-3eb8-4cb6-a8d3-de7e7d3038c2] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003390442s
addons_test.go:614: (dbg) Run:  kubectl --context addons-397143 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-397143 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-397143 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-397143 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.563393892s)
--- PASS: TestAddons/parallel/CSI (40.10s)
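
The CSI subtest walks a full snapshot/restore round trip: create a claim and a pod, snapshot the volume, delete the originals, then restore a new claim and pod from the snapshot. Condensed to the kubectl sequence it runs (manifests are the ones under testdata/csi-hostpath-driver):

  kubectl --context addons-397143 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-397143 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-397143 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-397143 delete pod task-pv-pod
  kubectl --context addons-397143 delete pvc hpvc
  kubectl --context addons-397143 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-397143 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml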

TestAddons/parallel/Headlamp (16.44s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-397143 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-s9rqr" [b7e5b41c-3860-4044-a06a-fd04e3d077ae] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-s9rqr" [b7e5b41c-3860-4044-a06a-fd04e3d077ae] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004152279s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-397143 addons disable headlamp --alsologtostderr -v=1: (5.698395199s)
--- PASS: TestAddons/parallel/Headlamp (16.44s)

TestAddons/parallel/CloudSpanner (5.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-nhmlh" [cc598cb5-a848-482a-aab6-07719c0c9fe7] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003516313s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

TestAddons/parallel/NvidiaDevicePlugin (6.45s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-znf8f" [17e7dbb3-481b-40e2-95e0-1b3aeb866481] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003428937s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.45s)

TestAddons/parallel/Yakd (11.67s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-m9tcf" [9cb7a1ca-2320-4223-b587-0316c96ec8fb] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004411454s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-397143 addons disable yakd --alsologtostderr -v=1: (5.665214413s)
--- PASS: TestAddons/parallel/Yakd (11.67s)

TestAddons/parallel/AmdGpuDevicePlugin (5.45s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-7v6qd" [e7bf466a-43c8-41cb-9860-7d52e5aff252] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.00331724s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-397143 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.45s)

TestAddons/StoppedEnableDisable (11.28s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-397143
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-397143: (10.969218849s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-397143
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-397143
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-397143
--- PASS: TestAddons/StoppedEnableDisable (11.28s)
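The addon toggling above runs against a stopped cluster, which is the point of the test: addon state changes must not require a running node. A hedged manual equivalent, with <profile> standing in for any existing profile name:

    minikube stop -p <profile>
    # enable/disable must still succeed while the profile is stopped
    minikube addons enable dashboard -p <profile>
    minikube addons disable dashboard -p <profile>
    minikube addons disable gvisor -p <profile>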

TestCertOptions (27.89s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-475839 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E1206 10:08:53.084282  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-475839 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (24.657770034s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-475839 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-475839 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-475839 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-475839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-475839
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-475839: (2.484348381s)
--- PASS: TestCertOptions (27.89s)
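TestCertOptions feeds extra SANs and a custom port into the apiserver certificate, then reads the certificate and kubeconfig back out of the node. A rough manual reproduction using the same flags as the log; the profile name and the grep filters are illustrative additions, not part of the test:

    minikube start -p <profile> --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=docker
    # the requested IPs/names should appear as Subject Alternative Names
    minikube -p <profile> ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # the custom port should show up in the in-node admin kubeconfig
    minikube ssh -p <profile> -- "sudo cat /etc/kubernetes/admin.conf" | grep 8555
    minikube delete -p <profile>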

TestCertExpiration (236.43s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-374315 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-374315 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (27.542081397s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-374315 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-374315 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (25.961178957s)
helpers_test.go:175: Cleaning up "cert-expiration-374315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-374315
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-374315: (2.928541016s)
--- PASS: TestCertExpiration (236.43s)
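TestCertExpiration is two-phased: start with certificates that expire in three minutes, wait them out, then restart the same profile with --cert-expiration=8760h so minikube has to regenerate them (the ~236s runtime is mostly that wait). A minimal sketch with a placeholder profile; the explicit sleep is an assumption about how the gap is spent:

    minikube start -p <profile> --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=docker
    sleep 180   # let the short-lived certs expire
    minikube start -p <profile> --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=docker
    minikube delete -p <profile>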

TestDockerFlags (38.9s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-467761 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-467761 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (35.585153309s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-467761 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-467761 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-467761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-467761
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-467761: (2.375490876s)
--- PASS: TestDockerFlags (38.90s)
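TestDockerFlags only has to prove that --docker-env and --docker-opt reach the dockerd systemd unit inside the node, so the assertions reduce to two `systemctl show` reads. Roughly, with a placeholder profile:

    minikube start -p <profile> --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --driver=docker --container-runtime=docker
    # FOO=BAR and BAZ=BAT should appear in the unit's Environment= property
    minikube -p <profile> ssh "sudo systemctl show docker --property=Environment --no-pager"
    # the daemon options should appear in ExecStart=
    minikube -p <profile> ssh "sudo systemctl show docker --property=ExecStart --no-pager"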

TestForceSystemdFlag (29.86s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-099988 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-099988 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (27.190439581s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-099988 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-099988" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-099988
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-099988: (2.288106543s)
--- PASS: TestForceSystemdFlag (29.86s)
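Both force-systemd tests (the flag here, the env variant below) end with the same probe: ask Docker inside the node which cgroup driver it ended up with. A minimal sketch, profile name hypothetical:

    minikube start -p <profile> --force-systemd --driver=docker --container-runtime=docker
    # with --force-systemd this should print "systemd"
    minikube -p <profile> ssh "docker info --format {{.CgroupDriver}}"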

TestForceSystemdEnv (28.52s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-873517 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1206 10:07:25.078055  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:07:31.142032  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:07:31.148431  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:07:31.160031  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:07:31.181503  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:07:31.223310  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:07:31.305346  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:07:31.467618  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:07:31.789630  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:07:32.431996  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:07:33.714265  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:07:36.276493  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:07:41.398784  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-873517 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.948226452s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-873517 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-873517" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-873517
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-873517: (2.213962944s)
--- PASS: TestForceSystemdEnv (28.52s)

TestErrorSpam/setup (24.62s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-966465 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-966465 --driver=docker  --container-runtime=docker
E1206 09:12:00.097702  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:00.104147  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:00.115620  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:00.137093  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:00.179043  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:00.260533  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:00.422126  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:00.743973  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:01.385539  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:12:02.667212  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-966465 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-966465 --driver=docker  --container-runtime=docker: (24.616367656s)
--- PASS: TestErrorSpam/setup (24.62s)

TestErrorSpam/start (0.69s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 start --dry-run
E1206 09:12:05.228590  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestErrorSpam/start (0.69s)

TestErrorSpam/status (0.97s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 status
--- PASS: TestErrorSpam/status (0.97s)

TestErrorSpam/pause (1.28s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 pause
--- PASS: TestErrorSpam/pause (1.28s)

TestErrorSpam/unpause (1.35s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 unpause
--- PASS: TestErrorSpam/unpause (1.35s)

TestErrorSpam/stop (11.07s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 stop
E1206 09:12:10.349969  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 stop: (10.852059199s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-966465 --log_dir /tmp/nospam-966465 stop
--- PASS: TestErrorSpam/stop (11.07s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22047-555179/.minikube/files/etc/test/nested/copy/558759/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (62.45s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059985 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E1206 09:12:41.073128  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:13:22.034692  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-059985 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m2.445290556s)
--- PASS: TestFunctional/serial/StartWithProxy (62.45s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.23s)
=== RUN   TestFunctional/serial/SoftStart
I1206 09:13:24.054469  558759 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059985 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-059985 --alsologtostderr -v=8: (38.230259207s)
functional_test.go:678: soft start took 38.231374837s for "functional-059985" cluster.
I1206 09:14:02.285575  558759 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (38.23s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-059985 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.12s)

TestFunctional/serial/CacheCmd/cache/add_local (0.77s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-059985 /tmp/TestFunctionalserialCacheCmdcacheadd_local3248593641/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 cache add minikube-local-cache-test:functional-059985
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 cache delete minikube-local-cache-test:functional-059985
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-059985
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.77s)
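add_local round-trips a locally built image through minikube's cache rather than pulling from a registry. The same flow by hand, assuming an arbitrary local tag and a build context in the current directory:

    docker build -t local-cache-demo:latest .
    minikube -p <profile> cache add local-cache-demo:latest
    minikube -p <profile> cache delete local-cache-demo:latest
    # clean up the host-side image
    docker rmi local-cache-demo:latest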

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.39s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059985 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (290.773865ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.39s)
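cache_reload deletes a cached image from inside the node, confirms crictl can no longer find it (the non-zero exit above is the expected failure), then uses `cache reload` to push the cache back in. The same loop by hand, profile placeholder aside:

    minikube -p <profile> ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image present
    minikube -p <profile> cache reload
    minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again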

TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 kubectl -- --context functional-059985 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-059985 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (35.54s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059985 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-059985 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.539942439s)
functional_test.go:776: restart took 35.540065089s for "functional-059985" cluster.
I1206 09:14:43.028654  558759 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (35.54s)
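ExtraConfig restarts an already-running profile with a component-scoped flag; the component.key=value form of --extra-config is what is being exercised. A sketch of the restart plus the control-plane check the next subtest performs, profile placeholder assumed:

    minikube start -p <profile> --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # the control plane pods should come back Running/Ready
    kubectl --context <profile> get po -l tier=control-plane -n kube-system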

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-059985 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.1s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 logs
E1206 09:14:43.956781  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-059985 logs: (1.09879626s)
--- PASS: TestFunctional/serial/LogsCmd (1.10s)

TestFunctional/serial/LogsFileCmd (1.09s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 logs --file /tmp/TestFunctionalserialLogsFileCmd2929917842/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-059985 logs --file /tmp/TestFunctionalserialLogsFileCmd2929917842/001/logs.txt: (1.093108657s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.09s)

TestFunctional/serial/InvalidService (4.32s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-059985 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-059985
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-059985: exit status 115 (358.928768ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32548 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-059985 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.32s)
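InvalidService checks that `minikube service` refuses a service with no running backing pod: exit status 115 and an SVC_UNREACHABLE message, as captured above. The shape of the check, with the testdata manifest path taken from the log and the profile hypothetical:

    kubectl --context <profile> apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p <profile>   # expected: exit status 115 (SVC_UNREACHABLE)
    kubectl --context <profile> delete -f testdata/invalidsvc.yaml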

TestFunctional/parallel/ConfigCmd (0.5s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059985 config get cpus: exit status 14 (105.454683ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059985 config get cpus: exit status 14 (76.654468ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
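ConfigCmd is a set/get/unset round trip; note that `config get` on an unset key exits with status 14 rather than printing an empty value, which is what the two non-zero exits above assert. Sketch, profile placeholder assumed:

    minikube -p <profile> config unset cpus
    minikube -p <profile> config get cpus    # exit status 14: key not found
    minikube -p <profile> config set cpus 2
    minikube -p <profile> config get cpus    # prints 2
    minikube -p <profile> config unset cpus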

TestFunctional/parallel/DryRun (0.39s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059985 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-059985 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (172.368855ms)

-- stdout --
	* [functional-059985] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1206 09:15:22.600991  615036 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:15:22.601319  615036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:15:22.601333  615036 out.go:374] Setting ErrFile to fd 2...
	I1206 09:15:22.601339  615036 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:15:22.601539  615036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:15:22.602007  615036 out.go:368] Setting JSON to false
	I1206 09:15:22.603146  615036 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7070,"bootTime":1765005453,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:15:22.603211  615036 start.go:143] virtualization: kvm guest
	I1206 09:15:22.605272  615036 out.go:179] * [functional-059985] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:15:22.606383  615036 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:15:22.606403  615036 notify.go:221] Checking for updates...
	I1206 09:15:22.608353  615036 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:15:22.609589  615036 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:15:22.610673  615036 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:15:22.611657  615036 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:15:22.612557  615036 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:15:22.613813  615036 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 09:15:22.614379  615036 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:15:22.639124  615036 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:15:22.639214  615036 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:15:22.698146  615036 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:15:22.687735362 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:15:22.698269  615036 docker.go:319] overlay module found
	I1206 09:15:22.700702  615036 out.go:179] * Using the docker driver based on existing profile
	I1206 09:15:22.701928  615036 start.go:309] selected driver: docker
	I1206 09:15:22.701951  615036 start.go:927] validating driver "docker" against &{Name:functional-059985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-059985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:15:22.702076  615036 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:15:22.703748  615036 out.go:203] 
	W1206 09:15:22.705002  615036 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:15:22.706183  615036 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059985 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.39s)
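DryRun leans on --dry-run doing full validation without mutating the cluster: the undersized memory request must fail with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23), while the flag-free dry run validates cleanly against the existing profile. Roughly, with a placeholder profile; the echo is an added illustration:

    minikube start -p <profile> --dry-run --memory 250MB --driver=docker --container-runtime=docker
    echo $?   # 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
    minikube start -p <profile> --dry-run --driver=docker --container-runtime=docker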

TestFunctional/parallel/InternationalLanguage (0.17s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059985 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-059985 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (173.029763ms)

-- stdout --
	* [functional-059985] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1206 09:15:22.996535  615253 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:15:22.996811  615253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:15:22.996821  615253 out.go:374] Setting ErrFile to fd 2...
	I1206 09:15:22.996825  615253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:15:22.997221  615253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:15:22.997704  615253 out.go:368] Setting JSON to false
	I1206 09:15:22.998815  615253 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7070,"bootTime":1765005453,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:15:22.998893  615253 start.go:143] virtualization: kvm guest
	I1206 09:15:23.000676  615253 out.go:179] * [functional-059985] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1206 09:15:23.002295  615253 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:15:23.002335  615253 notify.go:221] Checking for updates...
	I1206 09:15:23.004817  615253 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:15:23.006160  615253 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:15:23.007393  615253 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:15:23.008584  615253 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:15:23.009742  615253 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:15:23.011225  615253 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 09:15:23.011768  615253 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:15:23.036100  615253 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:15:23.036221  615253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:15:23.094472  615253 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:15:23.08480577 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:15:23.094606  615253 docker.go:319] overlay module found
	I1206 09:15:23.096284  615253 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1206 09:15:23.097259  615253 start.go:309] selected driver: docker
	I1206 09:15:23.097272  615253 start.go:927] validating driver "docker" against &{Name:functional-059985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-059985 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:15:23.097371  615253 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:15:23.098949  615253 out.go:203] 
	W1206 09:15:23.100067  615253 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 09:15:23.101109  615253 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)
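For anything consuming these status commands programmatically, the -o json form is easier to handle than hand-built templates. A sketch that decodes the same fields the format template above references, assuming a single-node profile (this shape is an assumption worth re-checking against your minikube version):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterStatus models only the fields the format template above touches.
type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	// status exits non-zero when a component is down, but the JSON on
	// stdout is still well-formed, so the exit error is ignored here.
	out, _ := exec.Command("minikube", "-p", "functional-059985",
		"status", "-o", "json").Output()
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}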

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-059985 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-059985 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-grhpg" [a364a4b5-2cf3-4fbe-90b9-1e87f2bff3c5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-grhpg" [a364a4b5-2cf3-4fbe-90b9-1e87f2bff3c5] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003671703s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31495
functional_test.go:1680: http://192.168.49.2:31495: success! body:
Request served by hello-node-connect-7d85dfc575-grhpg

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31495
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.68s)
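The sequence above is the standard NodePort round trip: create a deployment, expose it, ask minikube for the node URL, then hit it over HTTP. A condensed sketch of the verification half, assuming the deployment already exists and is ready:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// `service --url` prints the http://<node-ip>:<node-port> endpoint.
	out, err := exec.Command("minikube", "-p", "functional-059985",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// kicbase/echo-server reports the serving pod's name in the body,
	// which is how the test ties the response back to the deployment.
	fmt.Println(strings.Contains(string(body), "hello-node-connect"))
}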

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh -n functional-059985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 cp functional-059985:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd468134477/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh -n functional-059985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh -n functional-059985 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.93s)
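The cp test's pattern is copy in, read back over ssh, compare. A compact sketch of that round trip, assuming the profile above and a local testdata/cp-test.txt:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	run := func(args ...string) []byte {
		out, err := exec.Command("minikube",
			append([]string{"-p", "functional-059985"}, args...)...).Output()
		if err != nil {
			panic(err)
		}
		return out
	}
	// Copy the file into the node, then read it back through ssh.
	run("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	got := run("ssh", "-n", "functional-059985", "sudo cat /home/docker/cp-test.txt")
	fmt.Println("round-trip ok:", bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
}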

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/558759/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "sudo cat /etc/test/nested/copy/558759/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/558759.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "sudo cat /etc/ssl/certs/558759.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/558759.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "sudo cat /usr/share/ca-certificates/558759.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5587592.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "sudo cat /etc/ssl/certs/5587592.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5587592.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "sudo cat /usr/share/ca-certificates/5587592.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.65s)
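CertSync checks that a synced CA certificate shows up in all three places Linux tooling looks: the file under /etc/ssl/certs, the copy under /usr/share/ca-certificates, and the OpenSSL subject-hash form (the 51391683.0-style name). A sketch of the existence checks, reusing this run's paths (the numeric names are per-run; 558759 is the test process's PID):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/558759.pem",
		"/usr/share/ca-certificates/558759.pem",
		"/etc/ssl/certs/51391683.0", // OpenSSL subject-hash name for the same cert
	}
	for _, p := range paths {
		// `test -f` exits 0 only if the path resolves to a regular file.
		err := exec.Command("minikube", "-p", "functional-059985",
			"ssh", fmt.Sprintf("sudo test -f %s", p)).Run()
		fmt.Printf("%s present: %v\n", p, err == nil)
	}
}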

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-059985 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059985 ssh "sudo systemctl is-active crio": exit status 1 (272.804941ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)
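This test passes precisely because the command fails: with the docker runtime active, `systemctl is-active crio` prints "inactive" and exits 3, which shows up on minikube ssh's stderr while minikube itself exits non-zero. In Go that distinction is read through exec.ExitError, a pattern like:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-059985",
		"ssh", "sudo systemctl is-active crio").Output()
	var exitErr *exec.ExitError
	// The assertion is "non-zero exit AND stdout says inactive".
	if errors.As(err, &exitErr) && strings.TrimSpace(string(out)) == "inactive" {
		fmt.Println("crio correctly disabled; minikube exit code:", exitErr.ExitCode())
	}
}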

                                                
                                    
TestFunctional/parallel/License (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-059985 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-059985 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-ld9pq" [55edd1e0-4e7e-4aa0-820e-510bcaae7960] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-ld9pq" [55edd1e0-4e7e-4aa0-820e-510bcaae7960] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003098702s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.19s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-059985 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-059985 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-059985 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 610060: os: process already finished
helpers_test.go:519: unable to terminate pid 609769: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-059985 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-059985 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 service list -o json
functional_test.go:1504: Took "504.993462ms" to run "out/minikube-linux-amd64 -p functional-059985 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30517
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30517
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "354.709887ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "64.565184ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "351.764056ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "74.692355ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
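Downstream tooling usually wants the JSON form of the profile list. A sketch that decodes `profile list -o json`; the valid/invalid split modeled below matches recent minikube output but is an assumption worth re-checking against your version:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profiles models just enough of `minikube profile list -o json`.
type profiles struct {
	Valid []struct {
		Name   string
		Status string
	} `json:"valid"`
	Invalid []struct {
		Name string
	} `json:"invalid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var p profiles
	if err := json.Unmarshal(out, &p); err != nil {
		panic(err)
	}
	for _, v := range p.Valid {
		fmt.Printf("%s: %s\n", v.Name, v.Status)
	}
}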

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-059985 /tmp/TestFunctionalparallelMountCmdany-port2665761603/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765012511224140653" to /tmp/TestFunctionalparallelMountCmdany-port2665761603/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765012511224140653" to /tmp/TestFunctionalparallelMountCmdany-port2665761603/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765012511224140653" to /tmp/TestFunctionalparallelMountCmdany-port2665761603/001/test-1765012511224140653
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059985 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (294.532566ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 09:15:11.519110  558759 retry.go:31] will retry after 592.111895ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 09:15 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 09:15 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 09:15 test-1765012511224140653
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh cat /mount-9p/test-1765012511224140653
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-059985 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [6ab6bfcc-cf62-4dad-a3a0-8b1ce78377bf] Pending
helpers_test.go:352: "busybox-mount" [6ab6bfcc-cf62-4dad-a3a0-8b1ce78377bf] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [6ab6bfcc-cf62-4dad-a3a0-8b1ce78377bf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [6ab6bfcc-cf62-4dad-a3a0-8b1ce78377bf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003916972s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-059985 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059985 /tmp/TestFunctionalparallelMountCmdany-port2665761603/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.85s)
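The "will retry after ..." lines above are minikube's test retry helper at work: a freshly started 9p mount takes a moment to appear in the guest, so findmnt is polled with growing delays rather than asserted once. A minimal standalone version of that pattern:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry runs f up to attempts times, doubling the delay between tries.
func retry(attempts int, delay time.Duration, f func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	err := retry(5, 500*time.Millisecond, func() error {
		return exec.Command("minikube", "-p", "functional-059985",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
	})
	fmt.Println("mount visible:", err == nil)
}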

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-059985 /tmp/TestFunctionalparallelMountCmdspecific-port2064617473/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059985 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (291.116327ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 09:15:19.364466  558759 retry.go:31] will retry after 270.693491ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059985 /tmp/TestFunctionalparallelMountCmdspecific-port2064617473/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059985 ssh "sudo umount -f /mount-9p": exit status 1 (295.710073ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-059985 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059985 /tmp/TestFunctionalparallelMountCmdspecific-port2064617473/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.62s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-059985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3406461228/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-059985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3406461228/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-059985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3406461228/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059985 ssh "findmnt -T" /mount1: exit status 1 (354.494701ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1206 09:15:21.052429  558759 retry.go:31] will retry after 599.177897ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-059985 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3406461228/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3406461228/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059985 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3406461228/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-059985 docker-env) && out/minikube-linux-amd64 status -p functional-059985"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-059985 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.98s)
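What the bash one-liner above does: `minikube docker-env` prints export statements that point the local docker CLI at the daemon inside the node, so a following `docker images` lists the cluster's images rather than the host's. The same thing driven from Go, shelling out to bash for the eval:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// eval applies the exported variables (DOCKER_HOST and friends) to the
	// same shell that then runs `docker images` against the node's daemon.
	script := `eval $(minikube -p functional-059985 docker-env) && docker images`
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}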

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-059985 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-059985
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-059985
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-059985 image ls --format short --alsologtostderr:
I1206 09:20:33.967119  622273 out.go:360] Setting OutFile to fd 1 ...
I1206 09:20:33.967238  622273 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:20:33.967246  622273 out.go:374] Setting ErrFile to fd 2...
I1206 09:20:33.967250  622273 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:20:33.967444  622273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
I1206 09:20:33.967956  622273 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1206 09:20:33.968047  622273 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1206 09:20:33.968460  622273 cli_runner.go:164] Run: docker container inspect functional-059985 --format={{.State.Status}}
I1206 09:20:33.986614  622273 ssh_runner.go:195] Run: systemctl --version
I1206 09:20:33.986666  622273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-059985
I1206 09:20:34.005819  622273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/functional-059985/id_rsa Username:docker}
I1206 09:20:34.099112  622273 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-059985 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ localhost/my-image                          │ functional-059985 │ a5ffa7fb8a1f8 │ 1.24MB │
│ docker.io/library/minikube-local-cache-test │ functional-059985 │ 733830fb04f05 │ 30B    │
│ registry.k8s.io/kube-apiserver              │ v1.34.2           │ a5f569d49a979 │ 88MB   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2           │ 01e8bacf0f500 │ 74.9MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2           │ 88320b5498ff2 │ 52.8MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/kube-proxy                  │ v1.34.2           │ 8aa150647e88a │ 71.9MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ docker.io/kicbase/echo-server               │ functional-059985 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-059985 image ls --format table --alsologtostderr:
I1206 09:20:37.239928  622792 out.go:360] Setting OutFile to fd 1 ...
I1206 09:20:37.240180  622792 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:20:37.240188  622792 out.go:374] Setting ErrFile to fd 2...
I1206 09:20:37.240192  622792 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:20:37.240401  622792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
I1206 09:20:37.240902  622792 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1206 09:20:37.241014  622792 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1206 09:20:37.241406  622792 cli_runner.go:164] Run: docker container inspect functional-059985 --format={{.State.Status}}
I1206 09:20:37.259227  622792 ssh_runner.go:195] Run: systemctl --version
I1206 09:20:37.259278  622792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-059985
I1206 09:20:37.277778  622792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/functional-059985/id_rsa Username:docker}
I1206 09:20:37.369601  622792 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-059985 image ls --format json --alsologtostderr:
[{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"a5ffa7fb8a1f877720d85ae6205d45f8dd62e4fce1f49d9c02c65419cd294e56","repoDigests":[],"repoTags":["localhost/my-image:functional-059985"],"size":"1240000"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"52800000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899
ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-059985","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"74900000"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"71900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"re
poTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"733830fb04f057df569bb479cdc40b19b41b082c3f4b50e14e4a28da96b72962","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-059985"],"size":"30"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"88000000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-059985 image ls --format json --alsologtostderr:
I1206 09:20:37.023296  622737 out.go:360] Setting OutFile to fd 1 ...
I1206 09:20:37.023625  622737 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:20:37.023637  622737 out.go:374] Setting ErrFile to fd 2...
I1206 09:20:37.023644  622737 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:20:37.023861  622737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
I1206 09:20:37.024421  622737 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1206 09:20:37.024514  622737 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1206 09:20:37.024985  622737 cli_runner.go:164] Run: docker container inspect functional-059985 --format={{.State.Status}}
I1206 09:20:37.043229  622737 ssh_runner.go:195] Run: systemctl --version
I1206 09:20:37.043283  622737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-059985
I1206 09:20:37.060860  622737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/functional-059985/id_rsa Username:docker}
I1206 09:20:37.153481  622737 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
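The JSON listing is the machine-friendly variant of the table output earlier. A sketch that decodes it, modeling only the fields visible above (note that size arrives as a quoted string, not a number):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image models the entries emitted by `minikube image ls --format json`.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"` // quoted in the JSON, hence a string
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-059985",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%v %s\n", img.RepoTags, img.Size)
	}
}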

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-059985 image ls --format yaml --alsologtostderr:
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "52800000"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "71900000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-059985
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 733830fb04f057df569bb479cdc40b19b41b082c3f4b50e14e4a28da96b72962
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-059985
size: "30"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "88000000"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "74900000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-059985 image ls --format yaml --alsologtostderr:
I1206 09:20:34.185042  622325 out.go:360] Setting OutFile to fd 1 ...
I1206 09:20:34.185295  622325 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:20:34.185305  622325 out.go:374] Setting ErrFile to fd 2...
I1206 09:20:34.185309  622325 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:20:34.185543  622325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
I1206 09:20:34.186151  622325 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1206 09:20:34.186270  622325 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1206 09:20:34.186732  622325 cli_runner.go:164] Run: docker container inspect functional-059985 --format={{.State.Status}}
I1206 09:20:34.204369  622325 ssh_runner.go:195] Run: systemctl --version
I1206 09:20:34.204427  622325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-059985
I1206 09:20:34.221726  622325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/functional-059985/id_rsa Username:docker}
I1206 09:20:34.315729  622325 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059985 ssh pgrep buildkitd: exit status 1 (266.310335ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image build -t localhost/my-image:functional-059985 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-059985 image build -t localhost/my-image:functional-059985 testdata/build --alsologtostderr: (2.134419758s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-059985 image build -t localhost/my-image:functional-059985 testdata/build --alsologtostderr:
I1206 09:20:34.667578  622487 out.go:360] Setting OutFile to fd 1 ...
I1206 09:20:34.667701  622487 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:20:34.667712  622487 out.go:374] Setting ErrFile to fd 2...
I1206 09:20:34.667718  622487 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:20:34.667963  622487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
I1206 09:20:34.668559  622487 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1206 09:20:34.669249  622487 config.go:182] Loaded profile config "functional-059985": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1206 09:20:34.669708  622487 cli_runner.go:164] Run: docker container inspect functional-059985 --format={{.State.Status}}
I1206 09:20:34.688403  622487 ssh_runner.go:195] Run: systemctl --version
I1206 09:20:34.688461  622487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-059985
I1206 09:20:34.707002  622487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/functional-059985/id_rsa Username:docker}
I1206 09:20:34.798728  622487 build_images.go:162] Building image from path: /tmp/build.3414449496.tar
I1206 09:20:34.798799  622487 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 09:20:34.807256  622487 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3414449496.tar
I1206 09:20:34.811033  622487 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3414449496.tar: stat -c "%s %y" /var/lib/minikube/build/build.3414449496.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3414449496.tar': No such file or directory
I1206 09:20:34.811063  622487 ssh_runner.go:362] scp /tmp/build.3414449496.tar --> /var/lib/minikube/build/build.3414449496.tar (3072 bytes)
I1206 09:20:34.828805  622487 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3414449496
I1206 09:20:34.836583  622487 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3414449496 -xf /var/lib/minikube/build/build.3414449496.tar
I1206 09:20:34.844410  622487 docker.go:361] Building image: /var/lib/minikube/build/build.3414449496
I1206 09:20:34.844475  622487 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-059985 /var/lib/minikube/build/build.3414449496
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:a5ffa7fb8a1f877720d85ae6205d45f8dd62e4fce1f49d9c02c65419cd294e56 done
#8 naming to localhost/my-image:functional-059985 done
#8 DONE 0.0s
I1206 09:20:36.722845  622487 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-059985 /var/lib/minikube/build/build.3414449496: (1.878340743s)
I1206 09:20:36.722944  622487 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3414449496
I1206 09:20:36.731458  622487 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3414449496.tar
I1206 09:20:36.739185  622487 build_images.go:218] Built localhost/my-image:functional-059985 from /tmp/build.3414449496.tar
I1206 09:20:36.739219  622487 build_images.go:134] succeeded building to: functional-059985
I1206 09:20:36.739226  622487 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.62s)
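
The ImageBuild log above shows the mechanics of building inside the node: minikube copies the build-context tar over SSH into /var/lib/minikube/build, unpacks it, runs docker build against the unpacked directory, and cleans up both the tar and the directory afterwards. A sketch of the user-facing command that drives this flow (the context directory testdata/build is a placeholder, since the excerpt does not show the test's own invocation):

  out/minikube-linux-amd64 -p functional-059985 image build -t localhost/my-image:functional-059985 testdata/build
  out/minikube-linux-amd64 -p functional-059985 image ls    # the new localhost/my-image tag should be listed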

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-059985
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image load --daemon kicbase/echo-server:functional-059985 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image load --daemon kicbase/echo-server:functional-059985 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-059985
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image load --daemon kicbase/echo-server:functional-059985 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image save kicbase/echo-server:functional-059985 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image rm kicbase/echo-server:functional-059985 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-059985
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-059985 image save --daemon kicbase/echo-server:functional-059985 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-059985
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
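
Taken together, ImageSaveToFile, ImageLoadFromFile, and ImageSaveDaemon exercise the full save/load round trip between the node's runtime and the host. Condensed from the commands above (the tar path is shortened here for readability):

  out/minikube-linux-amd64 -p functional-059985 image save kicbase/echo-server:functional-059985 /tmp/echo-server-save.tar    # node -> tar on the host
  out/minikube-linux-amd64 -p functional-059985 image load /tmp/echo-server-save.tar                                          # tar -> node
  out/minikube-linux-amd64 -p functional-059985 image save --daemon kicbase/echo-server:functional-059985                     # node -> host docker daemon
  docker image inspect kicbase/echo-server:functional-059985                                                                  # verify on the host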

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-059985 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-059985
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-059985
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-059985
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22047-555179/.minikube/files/etc/test/nested/copy/558759/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (57.87s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-326239 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-326239 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: (57.871277979s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (57.87s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (39.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1206 09:25:52.300327  558759 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-326239 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-326239 --alsologtostderr -v=8: (39.949314044s)
functional_test.go:678: soft start took 39.949818765s for "functional-326239" cluster.
I1206 09:26:32.250159  558759 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (39.95s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-326239 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (2.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2745373186/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 cache add minikube-local-cache-test:functional-326239
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 cache delete minikube-local-cache-test:functional-326239
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-326239
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (0.67s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326239 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (282.153452ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.31s)
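
The cache_reload sequence above is also the recovery recipe when a cached image disappears from inside the node: after docker rmi in the node, crictl inspecti fails with exit status 1, and cache reload pushes every image in the host-side cache back into the node. Step by step, exactly as run:

  out/minikube-linux-amd64 -p functional-326239 ssh sudo docker rmi registry.k8s.io/pause:latest        # remove from the node
  out/minikube-linux-amd64 -p functional-326239 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image present
  out/minikube-linux-amd64 -p functional-326239 cache reload                                            # restore from the host-side cache
  out/minikube-linux-amd64 -p functional-326239 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again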

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 kubectl -- --context functional-326239 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-326239 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (40.63s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-326239 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 09:27:00.097139  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-326239 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.627017134s)
functional_test.go:776: restart took 40.627153447s for "functional-326239" cluster.
I1206 09:27:17.827284  558759 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (40.63s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-326239 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-326239 logs: (1.012790993s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.01s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs131148191/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-326239 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs131148191/001/logs.txt: (1.020856101s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-326239 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-326239
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-326239: exit status 115 (349.450699ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32626 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-326239 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326239 config get cpus: exit status 14 (88.065781ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326239 config get cpus: exit status 14 (90.502027ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.51s)
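
The ConfigCmd pass also documents the exit-code contract of minikube config: get on an unset key exits with status 14 and prints "Error: specified key could not be found in config", while set/get/unset on a present key exit 0. The cycle, as run above:

  out/minikube-linux-amd64 -p functional-326239 config set cpus 2
  out/minikube-linux-amd64 -p functional-326239 config get cpus     # prints the value, exit 0
  out/minikube-linux-amd64 -p functional-326239 config unset cpus
  out/minikube-linux-amd64 -p functional-326239 config get cpus     # exit 14: key not found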

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-326239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-326239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (170.676986ms)

                                                
                                                
-- stdout --
	* [functional-326239] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:27:49.678877  648337 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:27:49.679182  648337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:49.679194  648337 out.go:374] Setting ErrFile to fd 2...
	I1206 09:27:49.679198  648337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:49.679399  648337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:27:49.679836  648337 out.go:368] Setting JSON to false
	I1206 09:27:49.680966  648337 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7817,"bootTime":1765005453,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:27:49.681025  648337 start.go:143] virtualization: kvm guest
	I1206 09:27:49.682766  648337 out.go:179] * [functional-326239] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1206 09:27:49.683980  648337 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:27:49.684023  648337 notify.go:221] Checking for updates...
	I1206 09:27:49.686352  648337 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:27:49.687555  648337 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:27:49.692422  648337 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:27:49.693741  648337 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:27:49.694890  648337 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:27:49.696523  648337 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:49.697109  648337 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:27:49.722386  648337 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:27:49.722499  648337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:27:49.779122  648337 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:27:49.769017423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:27:49.779269  648337 docker.go:319] overlay module found
	I1206 09:27:49.781629  648337 out.go:179] * Using the docker driver based on existing profile
	I1206 09:27:49.782892  648337 start.go:309] selected driver: docker
	I1206 09:27:49.782906  648337 start.go:927] validating driver "docker" against &{Name:functional-326239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-326239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:49.783047  648337 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:27:49.784727  648337 out.go:203] 
	W1206 09:27:49.786033  648337 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 09:27:49.787177  648337 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-326239 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.39s)
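
The first dry run above fails by design: exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) comes from validating the requested --memory against minikube's usable minimum of 1800MB before anything starts, and the second dry run, which omits the undersized --memory flag, passes. A sketch of a dry run with an acceptable allocation (the 4096 value mirrors this suite's normal start flags):

  out/minikube-linux-amd64 start -p functional-326239 --dry-run --memory 4096 --alsologtostderr --driver=docker --container-runtime=docker --kubernetes-version=v1.35.0-beta.0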

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-326239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-326239 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (169.953852ms)

                                                
                                                
-- stdout --
	* [functional-326239] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 09:27:48.522015  647865 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:27:48.522310  647865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:48.522321  647865 out.go:374] Setting ErrFile to fd 2...
	I1206 09:27:48.522325  647865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:27:48.522630  647865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:27:48.523087  647865 out.go:368] Setting JSON to false
	I1206 09:27:48.524149  647865 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":7816,"bootTime":1765005453,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1206 09:27:48.524206  647865 start.go:143] virtualization: kvm guest
	I1206 09:27:48.526177  647865 out.go:179] * [functional-326239] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1206 09:27:48.527598  647865 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 09:27:48.527609  647865 notify.go:221] Checking for updates...
	I1206 09:27:48.530130  647865 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 09:27:48.531292  647865 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	I1206 09:27:48.532365  647865 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	I1206 09:27:48.533557  647865 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1206 09:27:48.534786  647865 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 09:27:48.536252  647865 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1206 09:27:48.536870  647865 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 09:27:48.560285  647865 docker.go:124] docker version: linux-29.1.2:Docker Engine - Community
	I1206 09:27:48.560382  647865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:27:48.620675  647865 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:1 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-06 09:27:48.610695895 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:27:48.620784  647865 docker.go:319] overlay module found
	I1206 09:27:48.623240  647865 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1206 09:27:48.624736  647865 start.go:309] selected driver: docker
	I1206 09:27:48.624756  647865 start.go:927] validating driver "docker" against &{Name:functional-326239 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-326239 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 09:27:48.624845  647865 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 09:27:48.626773  647865 out.go:203] 
	W1206 09:27:48.627938  647865 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 09:27:48.629139  647865 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.95s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.95s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.65s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.65s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.92s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh -n functional-326239 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 cp functional-326239:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2383907754/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh -n functional-326239 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh -n functional-326239 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.92s)
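
The CpCmd block demonstrates both directions of minikube cp: a plain destination path is interpreted inside the node, and prefixing a source path with the node name copies out of it. Condensed from the commands above (the host destination is shortened here):

  out/minikube-linux-amd64 -p functional-326239 cp testdata/cp-test.txt /home/docker/cp-test.txt                  # host -> node
  out/minikube-linux-amd64 -p functional-326239 cp functional-326239:/home/docker/cp-test.txt /tmp/cp-test.txt    # node -> host
  out/minikube-linux-amd64 -p functional-326239 ssh -n functional-326239 "sudo cat /home/docker/cp-test.txt"      # verify inside the node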

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/558759/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "sudo cat /etc/test/nested/copy/558759/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/558759.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "sudo cat /etc/ssl/certs/558759.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/558759.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "sudo cat /usr/share/ca-certificates/558759.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5587592.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "sudo cat /etc/ssl/certs/5587592.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5587592.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "sudo cat /usr/share/ca-certificates/5587592.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.62s)
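
Note: CertSync verifies that certificates the suite drops under $MINIKUBE_HOME/certs on the host end up installed in the node at /etc/ssl/certs and /usr/share/ca-certificates. The hashed names above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention; the hash for a given certificate can be reproduced with:

	openssl x509 -hash -noout -in /etc/ssl/certs/558759.pem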

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-326239 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326239 ssh "sudo systemctl is-active crio": exit status 1 (327.895708ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.33s)
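
Note: systemctl is-active exits non-zero for any state other than active (3 conventionally means inactive), so the non-zero exit surfaced through ssh above is the expected result on this docker-runtime cluster. The equivalent manual check would be something like:

	out/minikube-linux-amd64 -p functional-326239 ssh "sudo systemctl is-active crio" || echo "cri-o is not the active runtime"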

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-326239 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-326239
docker.io/kicbase/echo-server:functional-326239
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-326239 image ls --format short --alsologtostderr:
I1206 09:33:34.379454  653337 out.go:360] Setting OutFile to fd 1 ...
I1206 09:33:34.379702  653337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:34.379710  653337 out.go:374] Setting ErrFile to fd 2...
I1206 09:33:34.379718  653337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:34.379951  653337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
I1206 09:33:34.380552  653337 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1206 09:33:34.380644  653337 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1206 09:33:34.381456  653337 cli_runner.go:164] Run: docker container inspect functional-326239 --format={{.State.Status}}
I1206 09:33:34.399996  653337 ssh_runner.go:195] Run: systemctl --version
I1206 09:33:34.400045  653337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326239
I1206 09:33:34.417806  653337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/functional-326239/id_rsa Username:docker}
I1206 09:33:34.509827  653337 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-326239 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ docker.io/library/minikube-local-cache-test │ functional-326239 │ 733830fb04f05 │ 30B    │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0    │ 45f3cc72d235f │ 75.8MB │
│ docker.io/kicbase/echo-server               │ functional-326239 │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ localhost/my-image                          │ functional-326239 │ 8e0988533bbfa │ 1.24MB │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0    │ 7bb6219ddab95 │ 51.7MB │
│ docker.io/library/nginx                     │ alpine            │ d4918ca78576a │ 52.8MB │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0    │ aa9d02839d8de │ 89.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0    │ 8a4ded35a3eb1 │ 70.7MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-326239 image ls --format table --alsologtostderr:
I1206 09:33:37.845126  653874 out.go:360] Setting OutFile to fd 1 ...
I1206 09:33:37.845249  653874 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:37.845262  653874 out.go:374] Setting ErrFile to fd 2...
I1206 09:33:37.845267  653874 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:37.845515  653874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
I1206 09:33:37.846155  653874 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1206 09:33:37.846297  653874 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1206 09:33:37.846790  653874 cli_runner.go:164] Run: docker container inspect functional-326239 --format={{.State.Status}}
I1206 09:33:37.864991  653874 ssh_runner.go:195] Run: systemctl --version
I1206 09:33:37.865040  653874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326239
I1206 09:33:37.881799  653874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/functional-326239/id_rsa Username:docker}
I1206 09:33:37.974590  653874 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-326239 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"89700000"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"70700000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-326239"],"size":"4940000"},{"id":"733830fb04f057df569bb479cdc40b19b41b082c3f4b50e14e4a28da96b72962","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-326239"],"size":"30"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"51700000"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"75800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"8e0988533bbfa0a1aa99bcb74c88c369b40b6e0c9797db4fd7370e3306b18af1","repoDigests":[],"repoTags":["localhost/my-image:functional-326239"],"size":"1240000"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52800000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-326239 image ls --format json --alsologtostderr:
I1206 09:33:37.625432  653804 out.go:360] Setting OutFile to fd 1 ...
I1206 09:33:37.625549  653804 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:37.625558  653804 out.go:374] Setting ErrFile to fd 2...
I1206 09:33:37.625562  653804 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:37.625778  653804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
I1206 09:33:37.626357  653804 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1206 09:33:37.626448  653804 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1206 09:33:37.626874  653804 cli_runner.go:164] Run: docker container inspect functional-326239 --format={{.State.Status}}
I1206 09:33:37.645746  653804 ssh_runner.go:195] Run: systemctl --version
I1206 09:33:37.645796  653804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326239
I1206 09:33:37.664777  653804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/functional-326239/id_rsa Username:docker}
I1206 09:33:37.756713  653804 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-326239 image ls --format yaml --alsologtostderr:
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "51700000"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "75800000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-326239
size: "4940000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 733830fb04f057df569bb479cdc40b19b41b082c3f4b50e14e4a28da96b72962
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-326239
size: "30"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "89700000"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "70700000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-326239 image ls --format yaml --alsologtostderr:
I1206 09:33:34.597981  653390 out.go:360] Setting OutFile to fd 1 ...
I1206 09:33:34.598210  653390 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:34.598218  653390 out.go:374] Setting ErrFile to fd 2...
I1206 09:33:34.598222  653390 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:34.598401  653390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
I1206 09:33:34.598979  653390 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1206 09:33:34.599076  653390 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1206 09:33:34.599497  653390 cli_runner.go:164] Run: docker container inspect functional-326239 --format={{.State.Status}}
I1206 09:33:34.617538  653390 ssh_runner.go:195] Run: systemctl --version
I1206 09:33:34.617595  653390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326239
I1206 09:33:34.635092  653390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/functional-326239/id_rsa Username:docker}
I1206 09:33:34.726568  653390 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)
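
Note: the four ImageList variants above drive the same command with different --format values; the general shape is:

	out/minikube-linux-amd64 -p functional-326239 image ls --format <short|table|json|yaml> --alsologtostderr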

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326239 ssh pgrep buildkitd: exit status 1 (261.777473ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image build -t localhost/my-image:functional-326239 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-326239 image build -t localhost/my-image:functional-326239 testdata/build --alsologtostderr: (2.325564699s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-326239 image build -t localhost/my-image:functional-326239 testdata/build --alsologtostderr:
I1206 09:33:35.076304  653549 out.go:360] Setting OutFile to fd 1 ...
I1206 09:33:35.076427  653549 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:35.076437  653549 out.go:374] Setting ErrFile to fd 2...
I1206 09:33:35.076444  653549 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 09:33:35.077126  653549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
I1206 09:33:35.078562  653549 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1206 09:33:35.079200  653549 config.go:182] Loaded profile config "functional-326239": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1206 09:33:35.079621  653549 cli_runner.go:164] Run: docker container inspect functional-326239 --format={{.State.Status}}
I1206 09:33:35.098763  653549 ssh_runner.go:195] Run: systemctl --version
I1206 09:33:35.098829  653549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-326239
I1206 09:33:35.117544  653549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/functional-326239/id_rsa Username:docker}
I1206 09:33:35.209501  653549 build_images.go:162] Building image from path: /tmp/build.2169404634.tar
I1206 09:33:35.209571  653549 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 09:33:35.218295  653549 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2169404634.tar
I1206 09:33:35.222041  653549 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2169404634.tar: stat -c "%s %y" /var/lib/minikube/build/build.2169404634.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2169404634.tar': No such file or directory
I1206 09:33:35.222084  653549 ssh_runner.go:362] scp /tmp/build.2169404634.tar --> /var/lib/minikube/build/build.2169404634.tar (3072 bytes)
I1206 09:33:35.239540  653549 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2169404634
I1206 09:33:35.246930  653549 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2169404634 -xf /var/lib/minikube/build/build.2169404634.tar
I1206 09:33:35.254977  653549 docker.go:361] Building image: /var/lib/minikube/build/build.2169404634
I1206 09:33:35.255051  653549 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-326239 /var/lib/minikube/build/build.2169404634
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:8e0988533bbfa0a1aa99bcb74c88c369b40b6e0c9797db4fd7370e3306b18af1 done
#8 naming to localhost/my-image:functional-326239 done
#8 DONE 0.0s
I1206 09:33:37.320067  653549 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-326239 /var/lib/minikube/build/build.2169404634: (2.064987512s)
I1206 09:33:37.320176  653549 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2169404634
I1206 09:33:37.328554  653549 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2169404634.tar
I1206 09:33:37.336557  653549 build_images.go:218] Built localhost/my-image:functional-326239 from /tmp/build.2169404634.tar
I1206 09:33:37.336588  653549 build_images.go:134] succeeded building to: functional-326239
I1206 09:33:37.336593  653549 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (2.81s)
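
Note: judging from the BuildKit steps in the log (#1, #5, #6 and #7), testdata/build presumably contains a Dockerfile equivalent to the three-instruction sketch below; the actual file may differ in detail:

	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /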

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-326239
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image load --daemon kicbase/echo-server:functional-326239 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-326239 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-326239 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-326239 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 642145: os: process already finished
helpers_test.go:525: unable to kill pid 641781: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-326239 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image load --daemon kicbase/echo-server:functional-326239 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.92s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-326239 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-326239 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [bc485205-e30f-4a1e-b5dc-08f90bbb493f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [bc485205-e30f-4a1e-b5dc-08f90bbb493f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003636165s
I1206 09:27:34.694059  558759 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (9.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-326239
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image load --daemon kicbase/echo-server:functional-326239 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image save kicbase/echo-server:functional-326239 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image rm kicbase/echo-server:functional-326239 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-326239
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 image save --daemon kicbase/echo-server:functional-326239 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-326239
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-326239 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.95.210 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)
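
Note: with the tunnel from StartTunnel still running, the LoadBalancer service acquires the ingress IP read in WaitService/IngressIP, and that address (10.96.95.210 here) becomes directly reachable from the host, roughly:

	kubectl --context functional-326239 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.96.95.210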

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-326239 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "330.048195ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "63.387164ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)
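
Note: the timing gap above (330ms vs 63ms) reflects that profile list -l (light mode) presumably skips probing each cluster's live status:

	out/minikube-linux-amd64 profile list      # includes a status probe per profile
	out/minikube-linux-amd64 profile list -l   # light: listing only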

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "330.09338ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.690529ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo185964982/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765013256058412170" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo185964982/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765013256058412170" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo185964982/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765013256058412170" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo185964982/001/test-1765013256058412170
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326239 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (282.294916ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 09:27:36.341068  558759 retry.go:31] will retry after 326.935927ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 09:27 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 09:27 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 09:27 test-1765013256058412170
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh cat /mount-9p/test-1765013256058412170
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-326239 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [2a52ddca-637a-4932-a22f-cec2d38c1df0] Pending
helpers_test.go:352: "busybox-mount" [2a52ddca-637a-4932-a22f-cec2d38c1df0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [2a52ddca-637a-4932-a22f-cec2d38c1df0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [2a52ddca-637a-4932-a22f-cec2d38c1df0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003491948s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-326239 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo185964982/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (6.57s)
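
Note: the MountCmd tests exercise minikube's 9p host mount. A minimal manual reproduction of the flow above, with an illustrative host directory:

	out/minikube-linux-amd64 mount -p functional-326239 /tmp/hostdir:/mount-9p &
	out/minikube-linux-amd64 -p functional-326239 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-326239 ssh -- ls -la /mount-9p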

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1490801804/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326239 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.266222ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 09:27:42.908651  558759 retry.go:31] will retry after 392.302183ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1490801804/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326239 ssh "sudo umount -f /mount-9p": exit status 1 (270.950043ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-326239 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1490801804/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.71s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo492193237/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo492193237/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo492193237/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-326239 ssh "findmnt -T" /mount1: exit status 1 (341.097178ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 09:27:44.680396  558759 retry.go:31] will retry after 343.334883ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-326239 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo492193237/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo492193237/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-326239 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo492193237/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.55s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-326239 docker-env) && out/minikube-linux-amd64 status -p functional-326239"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-326239 docker-env) && docker images"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/bash (0.99s)
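The docker-env check follows the usual pattern of pointing the host's docker client at the daemon inside the node; a sketch using this run's profile:

	eval $(out/minikube-linux-amd64 -p functional-326239 docker-env)
	docker images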

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 update-context --alsologtostderr -v=2
E1206 09:34:49.803555  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-059985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:35:17.504513  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-059985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:37:00.097442  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.71s)
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-326239 service list: (1.712501299s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (1.71s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.72s)
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-326239 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-326239 service list -o json: (1.719144742s)
functional_test.go:1504: Took "1.719247203s" to run "out/minikube-linux-amd64 -p functional-326239 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (1.72s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-326239
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-326239
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-326239
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (122.05s)
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1206 09:39:49.803238  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-059985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-447220 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m1.333382293s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (122.05s)

TestMultiControlPlane/serial/DeployApp (5.18s)
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-447220 kubectl -- rollout status deployment/busybox: (3.004707858s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-96rgc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-c688p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-rkrpg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-96rgc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-c688p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-rkrpg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-96rgc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-c688p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-rkrpg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.18s)

TestMultiControlPlane/serial/PingHostFromPods (1.27s)
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-96rgc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-96rgc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-c688p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-c688p -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-rkrpg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-rkrpg -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)
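The awk/cut pipeline above leans on busybox nslookup's output layout: line 5 carries the answer record and its third space-separated field is the address resolved for host.minikube.internal, which the test then pings. The same probe standalone, with a pod name from this run:

	out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-96rgc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-amd64 -p ha-447220 kubectl -- exec busybox-7b57f96db7-96rgc -- sh -c "ping -c 1 192.168.49.1"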

TestMultiControlPlane/serial/AddWorkerNode (33.6s)
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-447220 node add --alsologtostderr -v 5: (32.725310426s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (33.60s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-447220 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

TestMultiControlPlane/serial/CopyFile (17.6s)
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp testdata/cp-test.txt ha-447220:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile867386770/001/cp-test_ha-447220.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220:/home/docker/cp-test.txt ha-447220-m02:/home/docker/cp-test_ha-447220_ha-447220-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m02 "sudo cat /home/docker/cp-test_ha-447220_ha-447220-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220:/home/docker/cp-test.txt ha-447220-m03:/home/docker/cp-test_ha-447220_ha-447220-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m03 "sudo cat /home/docker/cp-test_ha-447220_ha-447220-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220:/home/docker/cp-test.txt ha-447220-m04:/home/docker/cp-test_ha-447220_ha-447220-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m04 "sudo cat /home/docker/cp-test_ha-447220_ha-447220-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp testdata/cp-test.txt ha-447220-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile867386770/001/cp-test_ha-447220-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220-m02:/home/docker/cp-test.txt ha-447220:/home/docker/cp-test_ha-447220-m02_ha-447220.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220 "sudo cat /home/docker/cp-test_ha-447220-m02_ha-447220.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220-m02:/home/docker/cp-test.txt ha-447220-m03:/home/docker/cp-test_ha-447220-m02_ha-447220-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m03 "sudo cat /home/docker/cp-test_ha-447220-m02_ha-447220-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220-m02:/home/docker/cp-test.txt ha-447220-m04:/home/docker/cp-test_ha-447220-m02_ha-447220-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m04 "sudo cat /home/docker/cp-test_ha-447220-m02_ha-447220-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp testdata/cp-test.txt ha-447220-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile867386770/001/cp-test_ha-447220-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220-m03:/home/docker/cp-test.txt ha-447220:/home/docker/cp-test_ha-447220-m03_ha-447220.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220 "sudo cat /home/docker/cp-test_ha-447220-m03_ha-447220.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220-m03:/home/docker/cp-test.txt ha-447220-m02:/home/docker/cp-test_ha-447220-m03_ha-447220-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m02 "sudo cat /home/docker/cp-test_ha-447220-m03_ha-447220-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220-m03:/home/docker/cp-test.txt ha-447220-m04:/home/docker/cp-test_ha-447220-m03_ha-447220-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m04 "sudo cat /home/docker/cp-test_ha-447220-m03_ha-447220-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp testdata/cp-test.txt ha-447220-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile867386770/001/cp-test_ha-447220-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220-m04:/home/docker/cp-test.txt ha-447220:/home/docker/cp-test_ha-447220-m04_ha-447220.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220 "sudo cat /home/docker/cp-test_ha-447220-m04_ha-447220.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220-m04:/home/docker/cp-test.txt ha-447220-m02:/home/docker/cp-test_ha-447220-m04_ha-447220-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m02 "sudo cat /home/docker/cp-test_ha-447220-m04_ha-447220-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 cp ha-447220-m04:/home/docker/cp-test.txt ha-447220-m03:/home/docker/cp-test_ha-447220-m04_ha-447220-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m03 "sudo cat /home/docker/cp-test_ha-447220-m04_ha-447220-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.60s)
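Every round trip above is the same three-step pattern fanned out over all node pairs: stage a file on one node, cp it node-to-node, then cat it over ssh to compare. One pair, condensed from the commands of this run:

	out/minikube-linux-amd64 -p ha-447220 cp testdata/cp-test.txt ha-447220:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-447220 cp ha-447220:/home/docker/cp-test.txt ha-447220-m02:/home/docker/cp-test_ha-447220_ha-447220-m02.txt
	out/minikube-linux-amd64 -p ha-447220 ssh -n ha-447220-m02 "sudo cat /home/docker/cp-test_ha-447220_ha-447220-m02.txt"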

TestMultiControlPlane/serial/StopSecondaryNode (11.6s)
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-447220 node stop m02 --alsologtostderr -v 5: (10.904992815s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-447220 status --alsologtostderr -v 5: exit status 7 (696.607765ms)

-- stdout --
	ha-447220
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-447220-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-447220-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-447220-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1206 09:41:09.768311  684867 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:41:09.768614  684867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:41:09.768623  684867 out.go:374] Setting ErrFile to fd 2...
	I1206 09:41:09.768628  684867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:41:09.768863  684867 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:41:09.769100  684867 out.go:368] Setting JSON to false
	I1206 09:41:09.769127  684867 mustload.go:66] Loading cluster: ha-447220
	I1206 09:41:09.769211  684867 notify.go:221] Checking for updates...
	I1206 09:41:09.769474  684867 config.go:182] Loaded profile config "ha-447220": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 09:41:09.769489  684867 status.go:174] checking status of ha-447220 ...
	I1206 09:41:09.769961  684867 cli_runner.go:164] Run: docker container inspect ha-447220 --format={{.State.Status}}
	I1206 09:41:09.791638  684867 status.go:371] ha-447220 host status = "Running" (err=<nil>)
	I1206 09:41:09.791668  684867 host.go:66] Checking if "ha-447220" exists ...
	I1206 09:41:09.791923  684867 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-447220
	I1206 09:41:09.809468  684867 host.go:66] Checking if "ha-447220" exists ...
	I1206 09:41:09.809771  684867 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:41:09.809814  684867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-447220
	I1206 09:41:09.828026  684867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/ha-447220/id_rsa Username:docker}
	I1206 09:41:09.920285  684867 ssh_runner.go:195] Run: systemctl --version
	I1206 09:41:09.926558  684867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:41:09.939098  684867 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:41:09.996725  684867 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:6 ContainersRunning:3 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-06 09:41:09.985341768 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:41:09.997317  684867 kubeconfig.go:125] found "ha-447220" server: "https://192.168.49.254:8443"
	I1206 09:41:09.997348  684867 api_server.go:166] Checking apiserver status ...
	I1206 09:41:09.997391  684867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:10.010480  684867 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2200/cgroup
	W1206 09:41:10.019004  684867 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2200/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:41:10.019067  684867 ssh_runner.go:195] Run: ls
	I1206 09:41:10.023016  684867 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1206 09:41:10.027244  684867 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1206 09:41:10.027269  684867 status.go:463] ha-447220 apiserver status = Running (err=<nil>)
	I1206 09:41:10.027281  684867 status.go:176] ha-447220 status: &{Name:ha-447220 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:41:10.027302  684867 status.go:174] checking status of ha-447220-m02 ...
	I1206 09:41:10.027575  684867 cli_runner.go:164] Run: docker container inspect ha-447220-m02 --format={{.State.Status}}
	I1206 09:41:10.046010  684867 status.go:371] ha-447220-m02 host status = "Stopped" (err=<nil>)
	I1206 09:41:10.046037  684867 status.go:384] host is not running, skipping remaining checks
	I1206 09:41:10.046046  684867 status.go:176] ha-447220-m02 status: &{Name:ha-447220-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:41:10.046076  684867 status.go:174] checking status of ha-447220-m03 ...
	I1206 09:41:10.046437  684867 cli_runner.go:164] Run: docker container inspect ha-447220-m03 --format={{.State.Status}}
	I1206 09:41:10.066804  684867 status.go:371] ha-447220-m03 host status = "Running" (err=<nil>)
	I1206 09:41:10.066834  684867 host.go:66] Checking if "ha-447220-m03" exists ...
	I1206 09:41:10.067232  684867 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-447220-m03
	I1206 09:41:10.085517  684867 host.go:66] Checking if "ha-447220-m03" exists ...
	I1206 09:41:10.085824  684867 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:41:10.085868  684867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-447220-m03
	I1206 09:41:10.104320  684867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33201 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/ha-447220-m03/id_rsa Username:docker}
	I1206 09:41:10.195339  684867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:41:10.208073  684867 kubeconfig.go:125] found "ha-447220" server: "https://192.168.49.254:8443"
	I1206 09:41:10.208100  684867 api_server.go:166] Checking apiserver status ...
	I1206 09:41:10.208135  684867 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:41:10.219816  684867 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2125/cgroup
	W1206 09:41:10.228131  684867 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2125/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:41:10.228199  684867 ssh_runner.go:195] Run: ls
	I1206 09:41:10.232420  684867 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1206 09:41:10.236584  684867 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1206 09:41:10.236607  684867 status.go:463] ha-447220-m03 apiserver status = Running (err=<nil>)
	I1206 09:41:10.236616  684867 status.go:176] ha-447220-m03 status: &{Name:ha-447220-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:41:10.236631  684867 status.go:174] checking status of ha-447220-m04 ...
	I1206 09:41:10.236878  684867 cli_runner.go:164] Run: docker container inspect ha-447220-m04 --format={{.State.Status}}
	I1206 09:41:10.255064  684867 status.go:371] ha-447220-m04 host status = "Running" (err=<nil>)
	I1206 09:41:10.255086  684867 host.go:66] Checking if "ha-447220-m04" exists ...
	I1206 09:41:10.255319  684867 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-447220-m04
	I1206 09:41:10.273501  684867 host.go:66] Checking if "ha-447220-m04" exists ...
	I1206 09:41:10.273752  684867 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:41:10.273794  684867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-447220-m04
	I1206 09:41:10.293466  684867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/ha-447220-m04/id_rsa Username:docker}
	I1206 09:41:10.384626  684867 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:41:10.397813  684867 status.go:176] ha-447220-m04 status: &{Name:ha-447220-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.60s)
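The exit status 7 above is expected rather than an error: minikube status encodes component health as bits in the exit code (host, cluster, and Kubernetes respectively), so a node with everything stopped reports 1+2+4 = 7. Quick check:

	out/minikube-linux-amd64 -p ha-447220 status; echo $?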

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.72s)

TestMultiControlPlane/serial/RestartSecondaryNode (37.41s)
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-447220 node start m02 --alsologtostderr -v 5: (36.332508351s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.41s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (155.92s)
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 stop --alsologtostderr -v 5
E1206 09:42:00.097380  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-447220 stop --alsologtostderr -v 5: (33.890115129s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 start --wait true --alsologtostderr -v 5
E1206 09:42:25.077833  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:42:25.084273  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:42:25.095671  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:42:25.117097  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:42:25.158548  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:42:25.240048  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:42:25.401652  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:42:25.723401  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:42:26.365508  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:42:27.647580  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:42:30.209329  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:42:35.331182  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:42:45.573271  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:06.055059  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:43:47.016892  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-447220 start --wait true --alsologtostderr -v 5: (2m1.898733395s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (155.92s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.49s)
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-447220 node delete m03 --alsologtostderr -v 5: (8.686304522s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.49s)
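The go-template above flattens each node to just its Ready condition, so a healthy cluster after the delete prints one True per remaining node. The same check standalone:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'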

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

TestMultiControlPlane/serial/StopCluster (32.82s)
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 stop --alsologtostderr -v 5
E1206 09:44:49.803131  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-059985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:45:03.164832  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-447220 stop --alsologtostderr -v 5: (32.701752829s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-447220 status --alsologtostderr -v 5: exit status 7 (120.128239ms)

-- stdout --
	ha-447220
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-447220-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-447220-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1206 09:45:08.386840  714947 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:45:08.387147  714947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:45:08.387159  714947 out.go:374] Setting ErrFile to fd 2...
	I1206 09:45:08.387167  714947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:45:08.387425  714947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:45:08.387628  714947 out.go:368] Setting JSON to false
	I1206 09:45:08.387663  714947 mustload.go:66] Loading cluster: ha-447220
	I1206 09:45:08.387779  714947 notify.go:221] Checking for updates...
	I1206 09:45:08.388084  714947 config.go:182] Loaded profile config "ha-447220": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 09:45:08.388106  714947 status.go:174] checking status of ha-447220 ...
	I1206 09:45:08.388649  714947 cli_runner.go:164] Run: docker container inspect ha-447220 --format={{.State.Status}}
	I1206 09:45:08.408157  714947 status.go:371] ha-447220 host status = "Stopped" (err=<nil>)
	I1206 09:45:08.408181  714947 status.go:384] host is not running, skipping remaining checks
	I1206 09:45:08.408193  714947 status.go:176] ha-447220 status: &{Name:ha-447220 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:45:08.408239  714947 status.go:174] checking status of ha-447220-m02 ...
	I1206 09:45:08.408582  714947 cli_runner.go:164] Run: docker container inspect ha-447220-m02 --format={{.State.Status}}
	I1206 09:45:08.426387  714947 status.go:371] ha-447220-m02 host status = "Stopped" (err=<nil>)
	I1206 09:45:08.426417  714947 status.go:384] host is not running, skipping remaining checks
	I1206 09:45:08.426429  714947 status.go:176] ha-447220-m02 status: &{Name:ha-447220-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:45:08.426455  714947 status.go:174] checking status of ha-447220-m04 ...
	I1206 09:45:08.426717  714947 cli_runner.go:164] Run: docker container inspect ha-447220-m04 --format={{.State.Status}}
	I1206 09:45:08.443176  714947 status.go:371] ha-447220-m04 host status = "Stopped" (err=<nil>)
	I1206 09:45:08.443220  714947 status.go:384] host is not running, skipping remaining checks
	I1206 09:45:08.443236  714947 status.go:176] ha-447220-m04 status: &{Name:ha-447220-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.82s)

TestMultiControlPlane/serial/RestartCluster (81.05s)
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1206 09:45:08.938598  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 09:46:12.866742  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-059985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-447220 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m20.259727718s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (81.05s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

TestMultiControlPlane/serial/AddSecondaryNode (41.14s)
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 node add --control-plane --alsologtostderr -v 5
E1206 09:47:00.097347  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-447220 node add --control-plane --alsologtostderr -v 5: (40.242696439s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-447220 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.14s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

TestImageBuild/serial/Setup (20.33s)
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-677026 --driver=docker  --container-runtime=docker
E1206 09:47:25.081122  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-677026 --driver=docker  --container-runtime=docker: (20.332405903s)
--- PASS: TestImageBuild/serial/Setup (20.33s)

TestImageBuild/serial/NormalBuild (1.06s)
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-677026
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-677026: (1.061127041s)
--- PASS: TestImageBuild/serial/NormalBuild (1.06s)

TestImageBuild/serial/BuildWithBuildArg (0.68s)
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-677026
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.68s)
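--build-opt forwards arbitrary key=value flags to the underlying builder, which is how the build argument and cache bypass reach docker build here; the invocation, rewrapped for readability:

	out/minikube-linux-amd64 image build -t aaa:latest \
	  --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache \
	  ./testdata/image-build/test-arg -p image-677026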

TestImageBuild/serial/BuildWithDockerIgnore (0.47s)
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-677026
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.47s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.48s)
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-677026
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.48s)

TestJSONOutput/start/Command (62.29s)
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-169961 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E1206 09:47:52.780063  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-169961 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m2.291353577s)
--- PASS: TestJSONOutput/start/Command (62.29s)

TestJSONOutput/start/Audit (0s)
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.5s)
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-169961 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.50s)

TestJSONOutput/pause/Audit (0s)
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.48s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-169961 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.48s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-169961 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-169961 --output=json --user=testUser: (5.865970488s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-751872 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-751872 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (77.082232ms)

-- stdout --
	{"specversion":"1.0","id":"d7f76f4a-1430-4ded-baa9-f36b85d63bc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-751872] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"862dbe81-6204-4da6-a720-72512d9af1af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22047"}}
	{"specversion":"1.0","id":"3683f188-1d7e-488f-862f-04f9e2eb67be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ac04c640-ae40-409f-a483-16d63bd1fc29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig"}}
	{"specversion":"1.0","id":"a95de152-96d0-4030-b9f0-1177191a8f57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube"}}
	{"specversion":"1.0","id":"d9d7a0a8-8cc6-42f4-b657-3c8fa0c1cb6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"44a7c4cb-e9f7-4e40-8698-7385c925a97c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b493e65c-7188-4a4f-847d-f75ea6abf6c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-751872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-751872
--- PASS: TestErrorJSONOutput (0.24s)
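
Note: in --output=json mode every stdout line above is a self-contained CloudEvents envelope, and failures arrive as io.k8s.sigs.minikube.error events carrying the exit code. A minimal sketch of filtering the stream, assuming jq is installed; the profile name "demo" is illustrative:

  # print only error events from a JSON-mode start (forcing a failure, as the test does)
  out/minikube-linux-amd64 start -p demo --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.exitcode): \(.data.message)"'
  # expected here: 56: The driver 'fail' is not supported on linux/amd64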

TestKicCustomNetwork/create_custom_network (26.98s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-665792 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-665792 --network=: (24.833138431s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-665792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-665792
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-665792: (2.127855345s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.98s)

TestKicCustomNetwork/use_default_bridge_network (21.78s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-858711 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-858711 --network=bridge: (19.750637338s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-858711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-858711
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-858711: (2.009854488s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.78s)

TestKicExistingNetwork (25.9s)

=== RUN   TestKicExistingNetwork
I1206 09:49:45.410708  558759 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1206 09:49:45.428556  558759 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1206 09:49:45.428639  558759 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1206 09:49:45.428677  558759 cli_runner.go:164] Run: docker network inspect existing-network
W1206 09:49:45.445336  558759 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1206 09:49:45.445370  558759 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1206 09:49:45.445389  558759 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1206 09:49:45.445513  558759 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1206 09:49:45.462579  558759 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6115e7b36dd1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:8a:f1:22:e9:3c:08} reservation:<nil>}
I1206 09:49:45.462991  558759 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cc3a40}
I1206 09:49:45.463019  558759 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1206 09:49:45.463074  558759 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1206 09:49:45.509692  558759 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-719175 --network=existing-network
E1206 09:49:49.805098  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-059985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-719175 --network=existing-network: (23.74741874s)
helpers_test.go:175: Cleaning up "existing-network-719175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-719175
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-719175: (2.018355919s)
I1206 09:50:11.293159  558759 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.90s)
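
Note: the test pre-creates the Docker network itself and then points minikube at it with --network, which joins the existing bridge instead of provisioning a new one. A minimal sketch of the same flow, with illustrative network and profile names:

  # create a bridge network up front, then have minikube reuse it
  docker network create --driver=bridge --subnet=192.168.58.0/24 existing-net
  out/minikube-linux-amd64 start -p demo --driver=docker --network=existing-net
  docker network ls --format '{{.Name}}'    # existing-net is reused, not recreated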

TestKicCustomSubnet (26.77s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-978521 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-978521 --subnet=192.168.60.0/24: (24.593956285s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-978521 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-978521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-978521
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-978521: (2.156352584s)
--- PASS: TestKicCustomSubnet (26.77s)
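
Note: with the docker driver, minikube names the cluster network after the profile, so the requested subnet can be read straight back from Docker, exactly as the test does. A sketch with an illustrative profile name:

  out/minikube-linux-amd64 start -p demo --subnet=192.168.60.0/24
  docker network inspect demo --format '{{(index .IPAM.Config 0).Subnet}}'    # expect 192.168.60.0/24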

TestKicStaticIP (25.42s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-979425 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-979425 --static-ip=192.168.200.200: (23.111410171s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-979425 ip
helpers_test.go:175: Cleaning up "static-ip-979425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-979425
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-979425: (2.159451751s)
--- PASS: TestKicStaticIP (25.42s)
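
Note: --static-ip pins the node container's address, which minikube ip then reports back. A minimal sketch, profile name illustrative:

  out/minikube-linux-amd64 start -p demo --static-ip=192.168.200.200
  out/minikube-linux-amd64 -p demo ip    # expect 192.168.200.200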

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (55.07s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-488959 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-488959 --driver=docker  --container-runtime=docker: (23.647407834s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-491513 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-491513 --driver=docker  --container-runtime=docker: (25.822668998s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-488959
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-491513
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-491513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-491513
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-491513: (2.171624617s)
helpers_test.go:175: Cleaning up "first-488959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-488959
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-488959: (2.195178958s)
--- PASS: TestMinikubeProfile (55.07s)
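
Note: minikube profile NAME selects the active profile, and profile list -ojson emits machine-readable state for scripting. A minimal sketch of reading the names back, assuming jq and assuming the output keeps its valid/invalid grouping (profile name illustrative):

  out/minikube-linux-amd64 profile demo                                  # make demo the active profile
  out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'   # list healthy profiles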

TestMountStart/serial/StartWithMountFirst (9.26s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-879151 --memory=3072 --mount-string /tmp/TestMountStartserial1857412360/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E1206 09:52:00.097533  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-879151 --memory=3072 --mount-string /tmp/TestMountStartserial1857412360/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.263720034s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.26s)
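
Note: with --no-kubernetes the start above only provisions the node and wires up the 9p host mount. A minimal sketch mirroring those flags; the host path and profile name are illustrative:

  out/minikube-linux-amd64 start -p demo --no-kubernetes --driver=docker \
    --mount-string /srv/data:/minikube-host --mount-uid 0 --mount-gid 0 \
    --mount-msize 6543 --mount-port 46464
  out/minikube-linux-amd64 -p demo ssh -- ls /minikube-host    # should list the contents of /srv/data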

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-879151 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (9.3s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-897451 --memory=3072 --mount-string /tmp/TestMountStartserial1857412360/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-897451 --memory=3072 --mount-string /tmp/TestMountStartserial1857412360/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.298039305s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.30s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-897451 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.55s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-879151 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-879151 --alsologtostderr -v=5: (1.547712114s)
--- PASS: TestMountStart/serial/DeleteFirst (1.55s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-897451 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-897451
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-897451: (1.247103717s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (8.11s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-897451
E1206 09:52:25.081414  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-897451: (7.107244037s)
--- PASS: TestMountStart/serial/RestartStopped (8.11s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-897451 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (75.49s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-081526 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-081526 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m15.004483078s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.49s)
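
Note: a multi-node cluster comes up from a single start invocation, and status then reports the control plane and each worker separately. A minimal sketch, profile name illustrative:

  out/minikube-linux-amd64 start -p demo --nodes=2 --wait=true --driver=docker
  out/minikube-linux-amd64 -p demo status    # demo (Control Plane) plus demo-m02 (Worker)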

TestMultiNode/serial/DeployApp2Nodes (4.56s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081526 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081526 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-081526 -- rollout status deployment/busybox: (3.00863204s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081526 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081526 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081526 -- exec busybox-7b57f96db7-hlxn4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081526 -- exec busybox-7b57f96db7-nkg7n -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081526 -- exec busybox-7b57f96db7-hlxn4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081526 -- exec busybox-7b57f96db7-nkg7n -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081526 -- exec busybox-7b57f96db7-hlxn4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081526 -- exec busybox-7b57f96db7-nkg7n -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.56s)

TestMultiNode/serial/PingHostFrom2Pods (0.86s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081526 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081526 -- exec busybox-7b57f96db7-hlxn4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081526 -- exec busybox-7b57f96db7-hlxn4 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081526 -- exec busybox-7b57f96db7-nkg7n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-081526 -- exec busybox-7b57f96db7-nkg7n -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)
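
Note: pods reach the host through the host.minikube.internal name, which the test resolves and then pings at the gateway address. A minimal sketch from inside any pod; the pod name is illustrative, and the awk 'NR==5' offset matches the line layout of busybox's nslookup output:

  kubectl exec busybox -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  kubectl exec busybox -- sh -c "ping -c 1 192.168.67.1"    # the gateway address printed above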

TestMultiNode/serial/AddNode (30.47s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-081526 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-081526 -v=5 --alsologtostderr: (29.828124204s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (30.47s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-081526 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.65s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

TestMultiNode/serial/CopyFile (9.68s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 cp testdata/cp-test.txt multinode-081526:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 cp multinode-081526:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1642743490/001/cp-test_multinode-081526.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 cp multinode-081526:/home/docker/cp-test.txt multinode-081526-m02:/home/docker/cp-test_multinode-081526_multinode-081526-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526-m02 "sudo cat /home/docker/cp-test_multinode-081526_multinode-081526-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 cp multinode-081526:/home/docker/cp-test.txt multinode-081526-m03:/home/docker/cp-test_multinode-081526_multinode-081526-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526-m03 "sudo cat /home/docker/cp-test_multinode-081526_multinode-081526-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 cp testdata/cp-test.txt multinode-081526-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 cp multinode-081526-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1642743490/001/cp-test_multinode-081526-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 cp multinode-081526-m02:/home/docker/cp-test.txt multinode-081526:/home/docker/cp-test_multinode-081526-m02_multinode-081526.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526 "sudo cat /home/docker/cp-test_multinode-081526-m02_multinode-081526.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 cp multinode-081526-m02:/home/docker/cp-test.txt multinode-081526-m03:/home/docker/cp-test_multinode-081526-m02_multinode-081526-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526-m03 "sudo cat /home/docker/cp-test_multinode-081526-m02_multinode-081526-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 cp testdata/cp-test.txt multinode-081526-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 cp multinode-081526-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1642743490/001/cp-test_multinode-081526-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 cp multinode-081526-m03:/home/docker/cp-test.txt multinode-081526:/home/docker/cp-test_multinode-081526-m03_multinode-081526.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526 "sudo cat /home/docker/cp-test_multinode-081526-m03_multinode-081526.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 cp multinode-081526-m03:/home/docker/cp-test.txt multinode-081526-m02:/home/docker/cp-test_multinode-081526-m03_multinode-081526-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 ssh -n multinode-081526-m02 "sudo cat /home/docker/cp-test_multinode-081526-m03_multinode-081526-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.68s)
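
Note: minikube cp addresses files as [node:]path, so one file can be staged on the control plane and fanned out to workers, with ssh -n selecting the node to read it back. A minimal sketch; profile and node names are illustrative:

  out/minikube-linux-amd64 -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"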

TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-081526 node stop m03: (1.267619263s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-081526 status: exit status 7 (480.720927ms)

-- stdout --
	multinode-081526
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-081526-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-081526-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-081526 status --alsologtostderr: exit status 7 (493.283243ms)

-- stdout --
	multinode-081526
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-081526-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-081526-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1206 09:54:34.456792  797604 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:54:34.456889  797604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:54:34.456895  797604 out.go:374] Setting ErrFile to fd 2...
	I1206 09:54:34.456904  797604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:54:34.457153  797604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:54:34.457338  797604 out.go:368] Setting JSON to false
	I1206 09:54:34.457366  797604 mustload.go:66] Loading cluster: multinode-081526
	I1206 09:54:34.457438  797604 notify.go:221] Checking for updates...
	I1206 09:54:34.457872  797604 config.go:182] Loaded profile config "multinode-081526": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 09:54:34.457893  797604 status.go:174] checking status of multinode-081526 ...
	I1206 09:54:34.458451  797604 cli_runner.go:164] Run: docker container inspect multinode-081526 --format={{.State.Status}}
	I1206 09:54:34.477830  797604 status.go:371] multinode-081526 host status = "Running" (err=<nil>)
	I1206 09:54:34.477873  797604 host.go:66] Checking if "multinode-081526" exists ...
	I1206 09:54:34.478264  797604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-081526
	I1206 09:54:34.495831  797604 host.go:66] Checking if "multinode-081526" exists ...
	I1206 09:54:34.496243  797604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:54:34.496299  797604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-081526
	I1206 09:54:34.514733  797604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33317 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/multinode-081526/id_rsa Username:docker}
	I1206 09:54:34.605170  797604 ssh_runner.go:195] Run: systemctl --version
	I1206 09:54:34.611576  797604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:54:34.623708  797604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 09:54:34.679811  797604 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:5 ContainersRunning:2 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-06 09:54:34.670229759 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:29.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1206 09:54:34.680399  797604 kubeconfig.go:125] found "multinode-081526" server: "https://192.168.67.2:8443"
	I1206 09:54:34.680429  797604 api_server.go:166] Checking apiserver status ...
	I1206 09:54:34.680461  797604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 09:54:34.692995  797604 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2174/cgroup
	W1206 09:54:34.701274  797604 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2174/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1206 09:54:34.701324  797604 ssh_runner.go:195] Run: ls
	I1206 09:54:34.705079  797604 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1206 09:54:34.709311  797604 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1206 09:54:34.709334  797604 status.go:463] multinode-081526 apiserver status = Running (err=<nil>)
	I1206 09:54:34.709345  797604 status.go:176] multinode-081526 status: &{Name:multinode-081526 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:54:34.709365  797604 status.go:174] checking status of multinode-081526-m02 ...
	I1206 09:54:34.709625  797604 cli_runner.go:164] Run: docker container inspect multinode-081526-m02 --format={{.State.Status}}
	I1206 09:54:34.727170  797604 status.go:371] multinode-081526-m02 host status = "Running" (err=<nil>)
	I1206 09:54:34.727201  797604 host.go:66] Checking if "multinode-081526-m02" exists ...
	I1206 09:54:34.727497  797604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-081526-m02
	I1206 09:54:34.744958  797604 host.go:66] Checking if "multinode-081526-m02" exists ...
	I1206 09:54:34.745273  797604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 09:54:34.745329  797604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-081526-m02
	I1206 09:54:34.764886  797604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33322 SSHKeyPath:/home/jenkins/minikube-integration/22047-555179/.minikube/machines/multinode-081526-m02/id_rsa Username:docker}
	I1206 09:54:34.856074  797604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 09:54:34.870027  797604 status.go:176] multinode-081526-m02 status: &{Name:multinode-081526-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:54:34.870071  797604 status.go:174] checking status of multinode-081526-m03 ...
	I1206 09:54:34.870430  797604 cli_runner.go:164] Run: docker container inspect multinode-081526-m03 --format={{.State.Status}}
	I1206 09:54:34.887999  797604 status.go:371] multinode-081526-m03 host status = "Stopped" (err=<nil>)
	I1206 09:54:34.888021  797604 status.go:384] host is not running, skipping remaining checks
	I1206 09:54:34.888027  797604 status.go:176] multinode-081526-m03 status: &{Name:multinode-081526-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
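
Note: status deliberately exits non-zero (7, as captured above) while any node host is stopped, which makes the degraded state easy to script against. A minimal sketch; profile and node names are illustrative:

  out/minikube-linux-amd64 -p demo node stop m03
  out/minikube-linux-amd64 -p demo status || echo "degraded, exit=$?"    # exit 7 while m03 is down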

TestMultiNode/serial/StartAfterStop (9.52s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-081526 node start m03 -v=5 --alsologtostderr: (8.835530438s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.52s)

TestMultiNode/serial/RestartKeepsNodes (72.43s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-081526
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-081526
E1206 09:54:49.805374  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-059985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-081526: (22.832189252s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-081526 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-081526 --wait=true -v=5 --alsologtostderr: (49.46564204s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-081526
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.43s)

TestMultiNode/serial/DeleteNode (5.28s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-081526 node delete m03: (4.675917339s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.28s)

TestMultiNode/serial/StopMultiNode (21.84s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-081526 stop: (21.639788628s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-081526 status: exit status 7 (102.114078ms)

-- stdout --
	multinode-081526
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-081526-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-081526 status --alsologtostderr: exit status 7 (96.309218ms)

-- stdout --
	multinode-081526
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-081526-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1206 09:56:23.916172  812491 out.go:360] Setting OutFile to fd 1 ...
	I1206 09:56:23.916287  812491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:56:23.916294  812491 out.go:374] Setting ErrFile to fd 2...
	I1206 09:56:23.916301  812491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 09:56:23.916490  812491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 09:56:23.916691  812491 out.go:368] Setting JSON to false
	I1206 09:56:23.916721  812491 mustload.go:66] Loading cluster: multinode-081526
	I1206 09:56:23.916860  812491 notify.go:221] Checking for updates...
	I1206 09:56:23.917182  812491 config.go:182] Loaded profile config "multinode-081526": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 09:56:23.917206  812491 status.go:174] checking status of multinode-081526 ...
	I1206 09:56:23.917688  812491 cli_runner.go:164] Run: docker container inspect multinode-081526 --format={{.State.Status}}
	I1206 09:56:23.936414  812491 status.go:371] multinode-081526 host status = "Stopped" (err=<nil>)
	I1206 09:56:23.936436  812491 status.go:384] host is not running, skipping remaining checks
	I1206 09:56:23.936442  812491 status.go:176] multinode-081526 status: &{Name:multinode-081526 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 09:56:23.936462  812491 status.go:174] checking status of multinode-081526-m02 ...
	I1206 09:56:23.936692  812491 cli_runner.go:164] Run: docker container inspect multinode-081526-m02 --format={{.State.Status}}
	I1206 09:56:23.953969  812491 status.go:371] multinode-081526-m02 host status = "Stopped" (err=<nil>)
	I1206 09:56:23.953987  812491 status.go:384] host is not running, skipping remaining checks
	I1206 09:56:23.953993  812491 status.go:176] multinode-081526-m02 status: &{Name:multinode-081526-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.84s)

TestMultiNode/serial/RestartMultiNode (51.36s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-081526 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1206 09:57:00.097625  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-081526 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (50.719086366s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-081526 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.36s)

TestMultiNode/serial/ValidateNameConflict (24.92s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-081526
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-081526-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-081526-m02 --driver=docker  --container-runtime=docker: exit status 14 (76.300661ms)

-- stdout --
	* [multinode-081526-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-081526-m02' is duplicated with machine name 'multinode-081526-m02' in profile 'multinode-081526'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-081526-m03 --driver=docker  --container-runtime=docker
E1206 09:57:25.081100  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-081526-m03 --driver=docker  --container-runtime=docker: (22.321399632s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-081526
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-081526: exit status 80 (288.791923ms)

-- stdout --
	* Adding node m03 to cluster multinode-081526 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-081526-m03 already exists in multinode-081526-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-081526-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-081526-m03: (2.170368246s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.92s)

TestPreload (128.65s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-957282 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker
E1206 09:58:48.142066  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-957282 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker: (1m7.99451567s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-957282 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-957282 image pull gcr.io/k8s-minikube/busybox: (1.715906108s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-957282
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-957282: (10.833328488s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-957282 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E1206 09:59:49.803723  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-059985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-957282 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (45.685483336s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-957282 image list
helpers_test.go:175: Cleaning up "test-preload-957282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-957282
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-957282: (2.196992865s)
--- PASS: TestPreload (128.65s)

TestScheduledStopUnix (96.86s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-443758 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-443758 --memory=3072 --driver=docker  --container-runtime=docker: (23.57391737s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-443758 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1206 10:00:16.582209  837409 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:00:16.582777  837409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:00:16.582798  837409 out.go:374] Setting ErrFile to fd 2...
	I1206 10:00:16.582804  837409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:00:16.583413  837409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 10:00:16.583974  837409 out.go:368] Setting JSON to false
	I1206 10:00:16.584070  837409 mustload.go:66] Loading cluster: scheduled-stop-443758
	I1206 10:00:16.584418  837409 config.go:182] Loaded profile config "scheduled-stop-443758": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 10:00:16.584479  837409 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/config.json ...
	I1206 10:00:16.584653  837409 mustload.go:66] Loading cluster: scheduled-stop-443758
	I1206 10:00:16.584746  837409 config.go:182] Loaded profile config "scheduled-stop-443758": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-443758 -n scheduled-stop-443758
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-443758 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1206 10:00:16.985201  837555 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:00:16.985450  837555 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:00:16.985458  837555 out.go:374] Setting ErrFile to fd 2...
	I1206 10:00:16.985463  837555 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:00:16.985627  837555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 10:00:16.985840  837555 out.go:368] Setting JSON to false
	I1206 10:00:16.986058  837555 daemonize_unix.go:73] killing process 837444 as it is an old scheduled stop
	I1206 10:00:16.986158  837555 mustload.go:66] Loading cluster: scheduled-stop-443758
	I1206 10:00:16.986578  837555 config.go:182] Loaded profile config "scheduled-stop-443758": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 10:00:16.986668  837555 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/config.json ...
	I1206 10:00:16.986862  837555 mustload.go:66] Loading cluster: scheduled-stop-443758
	I1206 10:00:16.987018  837555 config.go:182] Loaded profile config "scheduled-stop-443758": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1206 10:00:16.991666  558759 retry.go:31] will retry after 77.714µs: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:16.992852  558759 retry.go:31] will retry after 142.573µs: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:16.993966  558759 retry.go:31] will retry after 160.69µs: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:16.995111  558759 retry.go:31] will retry after 419.223µs: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:16.996238  558759 retry.go:31] will retry after 498.162µs: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:16.997370  558759 retry.go:31] will retry after 634.622µs: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:16.998494  558759 retry.go:31] will retry after 1.647515ms: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:17.000750  558759 retry.go:31] will retry after 2.325878ms: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:17.003957  558759 retry.go:31] will retry after 2.699759ms: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:17.007179  558759 retry.go:31] will retry after 4.822829ms: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:17.012399  558759 retry.go:31] will retry after 6.234796ms: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:17.019630  558759 retry.go:31] will retry after 11.706401ms: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:17.031879  558759 retry.go:31] will retry after 16.954644ms: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:17.049178  558759 retry.go:31] will retry after 10.00341ms: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:17.059444  558759 retry.go:31] will retry after 17.99803ms: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:17.077696  558759 retry.go:31] will retry after 24.360453ms: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
I1206 10:00:17.102922  558759 retry.go:31] will retry after 87.543569ms: open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-443758 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-443758 -n scheduled-stop-443758
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-443758
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-443758 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1206 10:00:42.957083  838481 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:00:42.957322  838481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:00:42.957330  838481 out.go:374] Setting ErrFile to fd 2...
	I1206 10:00:42.957334  838481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:00:42.957534  838481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-555179/.minikube/bin
	I1206 10:00:42.957832  838481 out.go:368] Setting JSON to false
	I1206 10:00:42.957906  838481 mustload.go:66] Loading cluster: scheduled-stop-443758
	I1206 10:00:42.958267  838481 config.go:182] Loaded profile config "scheduled-stop-443758": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1206 10:00:42.958345  838481 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/scheduled-stop-443758/config.json ...
	I1206 10:00:42.958514  838481 mustload.go:66] Loading cluster: scheduled-stop-443758
	I1206 10:00:42.958622  838481 config.go:182] Loaded profile config "scheduled-stop-443758": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-443758
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-443758: exit status 7 (86.334301ms)

-- stdout --
	scheduled-stop-443758
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-443758 -n scheduled-stop-443758
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-443758 -n scheduled-stop-443758: exit status 7 (82.664921ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-443758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-443758
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-443758: (1.667627448s)
--- PASS: TestScheduledStopUnix (96.86s)

TestSkaffold (75.58s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3970693970 version
skaffold_test.go:63: skaffold version: v2.17.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-398934 --memory=3072 --driver=docker  --container-runtime=docker
E1206 10:01:43.168106  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-398934 --memory=3072 --driver=docker  --container-runtime=docker: (23.166602328s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3970693970 run --minikube-profile skaffold-398934 --kube-context skaffold-398934 --status-check=true --port-forward=false --interactive=false
E1206 10:02:00.096845  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:02:25.081096  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3970693970 run --minikube-profile skaffold-398934 --kube-context skaffold-398934 --status-check=true --port-forward=false --interactive=false: (37.314897956s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-7bf884b995-jwxd2" [59fdfd07-6fa8-4616-bda3-27a32d6fb7a2] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004319537s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-764bb59f77-rpcrk" [019e5f48-a37c-4fc2-af0a-8ec89fbe7adb] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003581583s
helpers_test.go:175: Cleaning up "skaffold-398934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-398934
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-398934: (3.222354815s)
--- PASS: TestSkaffold (75.58s)

TestInsufficientStorage (9.38s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-577009 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-577009 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.080455833s)

-- stdout --
	{"specversion":"1.0","id":"f2eb95db-a6a2-4eec-97c8-15c6bb433aab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-577009] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"52b72683-623b-46ad-97a2-133afd13369a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22047"}}
	{"specversion":"1.0","id":"481d390a-4ba8-4b2d-9736-bb5f3c1b601d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d5443a89-26f9-4e8c-b3de-1cdb5145dcc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig"}}
	{"specversion":"1.0","id":"31fa0a09-254e-42f0-aa2d-3eaccbb01b08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube"}}
	{"specversion":"1.0","id":"a22fbe0c-7596-464d-8ec8-1a0580d710a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4a867e32-cd1d-4d0f-9bdf-f81c1fb23cf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9bfba0fb-d103-49a3-9012-6f874baa567c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"81fb4c35-7a84-41a6-a374-b0f32dbb9f77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4ad07e55-b098-4359-923e-fd239d13c8e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a269b81b-f71b-42ea-8b8a-a5499529547e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2303526c-85f5-4048-9f87-966af3f1c8f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-577009\" primary control-plane node in \"insufficient-storage-577009\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d806438-b7dd-4b8c-8348-01529224dd00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764843390-22032 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9ca8197-a2ba-4743-9556-b60f47903f2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5bda52c1-088d-44cc-987a-8c8bb2dc439c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-577009 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-577009 --output=json --layout=cluster: exit status 7 (297.66187ms)

-- stdout --
	{"Name":"insufficient-storage-577009","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-577009","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1206 10:02:52.753500  850433 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-577009" does not appear in /home/jenkins/minikube-integration/22047-555179/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-577009 --output=json --layout=cluster
E1206 10:02:52.868853  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-059985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-577009 --output=json --layout=cluster: exit status 7 (287.229959ms)

-- stdout --
	{"Name":"insufficient-storage-577009","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-577009","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1206 10:02:53.041583  850544 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-577009" does not appear in /home/jenkins/minikube-integration/22047-555179/kubeconfig
	E1206 10:02:53.051619  850544 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/insufficient-storage-577009/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-577009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-577009
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-577009: (1.710229552s)
--- PASS: TestInsufficientStorage (9.38s)

TestRunningBinaryUpgrade (333.03s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.718227481 start -p running-upgrade-838003 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.718227481 start -p running-upgrade-838003 --memory=3072 --vm-driver=docker  --container-runtime=docker: (23.230933124s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-838003 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-838003 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (5m6.017406254s)
helpers_test.go:175: Cleaning up "running-upgrade-838003" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-838003
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-838003: (3.152778352s)
--- PASS: TestRunningBinaryUpgrade (333.03s)

TestKubernetesUpgrade (338.28s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-419199 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-419199 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.698095147s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-419199
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-419199: (12.230801798s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-419199 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-419199 status --format={{.Host}}: exit status 7 (119.491881ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-419199 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-419199 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m25.750420065s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-419199 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-419199 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-419199 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (81.227157ms)

-- stdout --
	* [kubernetes-upgrade-419199] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-419199
	    minikube start -p kubernetes-upgrade-419199 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4191992 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-419199 --kubernetes-version=v1.35.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-419199 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-419199 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.140726782s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-419199" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-419199
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-419199: (4.201325671s)
--- PASS: TestKubernetesUpgrade (338.28s)

TestMissingContainerUpgrade (70.56s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3945923825 start -p missing-upgrade-856324 --memory=3072 --driver=docker  --container-runtime=docker
E1206 10:07:51.640685  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3945923825 start -p missing-upgrade-856324 --memory=3072 --driver=docker  --container-runtime=docker: (22.11624445s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-856324
E1206 10:08:12.122776  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-856324: (10.491362773s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-856324
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-856324 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-856324 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.852798045s)
helpers_test.go:175: Cleaning up "missing-upgrade-856324" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-856324
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-856324: (2.505443189s)
--- PASS: TestMissingContainerUpgrade (70.56s)

TestStoppedBinaryUpgrade/Setup (0.72s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

TestStoppedBinaryUpgrade/Upgrade (320.91s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3397326451 start -p stopped-upgrade-757897 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3397326451 start -p stopped-upgrade-757897 --memory=3072 --vm-driver=docker  --container-runtime=docker: (49.076581359s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3397326451 -p stopped-upgrade-757897 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3397326451 -p stopped-upgrade-757897 stop: (10.809782823s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-757897 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-757897 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m21.018619107s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (320.91s)

TestPause/serial/Start (36.76s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-558376 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E1206 10:04:49.803511  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-059985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-558376 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (36.755469554s)
--- PASS: TestPause/serial/Start (36.76s)

TestPause/serial/SecondStartNoReconfiguration (40.11s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-558376 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-558376 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (40.088708665s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.11s)

TestPause/serial/Pause (0.51s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-558376 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.51s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-558376 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-558376 --output=json --layout=cluster: exit status 2 (327.949051ms)

-- stdout --
	{"Name":"pause-558376","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-558376","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)

TestPause/serial/Unpause (0.48s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-558376 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.48s)

TestPause/serial/PauseAgain (0.56s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-558376 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.56s)

TestPause/serial/DeletePaused (2.23s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-558376 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-558376 --alsologtostderr -v=5: (2.233198796s)
--- PASS: TestPause/serial/DeletePaused (2.23s)

TestPause/serial/VerifyDeletedResources (30.04s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (29.984291419s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-558376
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-558376: exit status 1 (18.326568ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-558376: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (30.04s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-873090 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-873090 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (86.819831ms)

-- stdout --
	* [NoKubernetes-873090] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-555179/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-555179/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (21.43s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-873090 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-873090 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (21.073985923s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-873090 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (21.43s)

TestNoKubernetes/serial/StartWithStopK8s (15.85s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-873090 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-873090 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (13.699780655s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-873090 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-873090 status -o json: exit status 2 (311.414313ms)

-- stdout --
	{"Name":"NoKubernetes-873090","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-873090
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-873090: (1.836724691s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.85s)

TestNoKubernetes/serial/Start (8.57s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-873090 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-873090 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (8.566612008s)
--- PASS: TestNoKubernetes/serial/Start (8.57s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22047-555179/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-873090 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-873090 "sudo systemctl is-active --quiet service kubelet": exit status 1 (284.247828ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (16.06s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
E1206 10:07:00.096834  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (15.192412576s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.06s)

TestNoKubernetes/serial/Stop (1.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-873090
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-873090: (1.278328387s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (7.25s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-873090 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-873090 --driver=docker  --container-runtime=docker: (7.245091861s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.25s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-873090 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-873090 "sudo systemctl is-active --quiet service kubelet": exit status 1 (325.389988ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-757897
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

TestNetworkPlugins/group/auto/Start (73.49s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m13.486055338s)
--- PASS: TestNetworkPlugins/group/auto/Start (73.49s)

TestNetworkPlugins/group/kindnet/Start (47.43s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (47.433466058s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (47.43s)

TestNetworkPlugins/group/calico/Start (57.6s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E1206 10:09:49.803147  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-059985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (57.603120361s)
--- PASS: TestNetworkPlugins/group/calico/Start (57.60s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-m7qzv" [01b71522-b191-44cd-84bc-1302abd4316f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003783922s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-741421 "pgrep -a kubelet"
I1206 10:10:00.472723  558759 config.go:182] Loaded profile config "kindnet-741421": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.53s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-741421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rvmdn" [c04b7028-e93f-4f3f-b665-ab6827b5e0d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rvmdn" [c04b7028-e93f-4f3f-b665-ab6827b5e0d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003251007s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.53s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-741421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-741421 "pgrep -a kubelet"
I1206 10:10:14.724577  558759 config.go:182] Loaded profile config "auto-741421": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-741421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qzcc7" [c28b24bf-a1fe-422b-80c7-e437e8da02e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1206 10:10:15.005669  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-qzcc7" [c28b24bf-a1fe-422b-80c7-e437e8da02e6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003028712s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.19s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-741421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-pg8lh" [7367da9e-f156-46e6-b9c2-c5e15006a84b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004276128s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (38.93s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (38.930191314s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (38.93s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-741421 "pgrep -a kubelet"
I1206 10:10:33.129331  558759 config.go:182] Loaded profile config "calico-741421": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-741421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5wlbh" [bd388010-9f2a-4488-b411-bc6195a6d35c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5wlbh" [bd388010-9f2a-4488-b411-bc6195a6d35c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003974685s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.22s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-741421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/false/Start (67.68s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m7.675957725s)
--- PASS: TestNetworkPlugins/group/false/Start (67.68s)

TestNetworkPlugins/group/enable-default-cni/Start (40.61s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (40.607922405s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (40.61s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-741421 "pgrep -a kubelet"
I1206 10:11:10.647498  558759 config.go:182] Loaded profile config "custom-flannel-741421": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-741421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-86zkw" [35ee66ea-fc17-4738-96a7-ab2ed569d592] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-86zkw" [35ee66ea-fc17-4738-96a7-ab2ed569d592] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.005088558s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-741421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/flannel/Start (34.65s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (34.651254543s)
--- PASS: TestNetworkPlugins/group/flannel/Start (34.65s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-741421 "pgrep -a kubelet"
I1206 10:11:47.304449  558759 config.go:182] Loaded profile config "enable-default-cni-741421": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.20s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-741421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9gqbl" [52e6fbc4-315b-4ad3-8d7a-239b9f47eee3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9gqbl" [52e6fbc4-315b-4ad3-8d7a-239b9f47eee3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004507031s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.20s)

TestNetworkPlugins/group/false/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-741421 "pgrep -a kubelet"
I1206 10:11:53.910894  558759 config.go:182] Loaded profile config "false-741421": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.31s)

TestNetworkPlugins/group/false/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-741421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cfwbr" [717a4461-746a-4276-8349-ea0748d0a858] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cfwbr" [717a4461-746a-4276-8349-ea0748d0a858] Running
E1206 10:12:00.096963  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/addons-397143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.003788039s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-741421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/false/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-741421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-5nbnm" [1dbeaf51-9f71-424f-af1f-00ca0cc48dcf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005763134s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/Start (73.56s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m13.561185414s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.56s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-741421 "pgrep -a kubelet"
I1206 10:12:22.434933  558759 config.go:182] Loaded profile config "flannel-741421": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-741421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fvtrn" [3d9208e2-e7a2-41c4-bb6c-ee3672415e8d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fvtrn" [3d9208e2-e7a2-41c4-bb6c-ee3672415e8d] Running
E1206 10:12:31.141869  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003608912s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

TestNetworkPlugins/group/kubenet/Start (69.43s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E1206 10:12:25.079105  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-741421 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m9.432446953s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (69.43s)

TestStartStop/group/old-k8s-version/serial/FirstStart (43.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-561689 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-561689 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (43.216035865s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (43.22s)

TestNetworkPlugins/group/flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-741421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestStartStop/group/no-preload/serial/FirstStart (66.06s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-083111 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
E1206 10:12:58.847198  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/skaffold-398934/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-083111 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: (1m6.063865094s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.06s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-561689 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e7a38d15-ed31-465d-8456-f9df6ef4444d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e7a38d15-ed31-465d-8456-f9df6ef4444d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004546532s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-561689 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-561689 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-561689 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/old-k8s-version/serial/Stop (11.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-561689 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-561689 --alsologtostderr -v=3: (11.019731535s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.02s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-741421 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561689 -n old-k8s-version-561689
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561689 -n old-k8s-version-561689: exit status 7 (93.201861ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-561689 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (46.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-561689 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
I1206 10:13:31.387517  558759 config.go:182] Loaded profile config "bridge-741421": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-561689 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (45.872889738s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-561689 -n old-k8s-version-561689
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (46.23s)

TestNetworkPlugins/group/bridge/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-741421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5hdf2" [0eaa65e9-7bd0-4fb2-b802-9d94a5599020] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5hdf2" [0eaa65e9-7bd0-4fb2-b802-9d94a5599020] Running
I1206 10:13:34.659264  558759 config.go:182] Loaded profile config "kubenet-741421": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004461452s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.21s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-741421 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-741421 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x2z8m" [3a849a73-15de-474f-8bb0-5bc50ae44a8a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x2z8m" [3a849a73-15de-474f-8bb0-5bc50ae44a8a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.003810576s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.18s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-741421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/kubenet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-741421 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-741421 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

TestStartStop/group/embed-certs/serial/FirstStart (71.82s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-407677 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-407677 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2: (1m11.818236356s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.82s)

TestStartStop/group/no-preload/serial/DeployApp (11.30s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-083111 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d6c07bd8-68d4-4216-b372-4b41a9ea9037] Pending
helpers_test.go:352: "busybox" [d6c07bd8-68d4-4216-b372-4b41a9ea9037] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d6c07bd8-68d4-4216-b372-4b41a9ea9037] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004635078s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-083111 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.30s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-807643 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-807643 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2: (1m8.144686323s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.14s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-083111 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-083111 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/no-preload/serial/Stop (11.09s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-083111 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-083111 --alsologtostderr -v=3: (11.094851566s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.09s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-hnnxj" [5762cd24-2224-4b3d-8762-60b93979330a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004112714s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-hnnxj" [5762cd24-2224-4b3d-8762-60b93979330a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003353345s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-561689 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-083111 -n no-preload-083111
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-083111 -n no-preload-083111: exit status 7 (105.147251ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-083111 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/no-preload/serial/SecondStart (52.48s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-083111 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-083111 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: (52.133691754s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-083111 -n no-preload-083111
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.48s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-561689 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-561689 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-561689 -n old-k8s-version-561689
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-561689 -n old-k8s-version-561689: exit status 2 (368.9883ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-561689 -n old-k8s-version-561689
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-561689 -n old-k8s-version-561689: exit status 2 (428.980135ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-561689 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-561689 -n old-k8s-version-561689
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-561689 -n old-k8s-version-561689
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.97s)

TestStartStop/group/newest-cni/serial/FirstStart (26.38s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-347996 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
E1206 10:14:49.803779  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-059985/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:14:54.131157  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/kindnet-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:14:54.137551  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/kindnet-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:14:54.148949  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/kindnet-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:14:54.173052  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/kindnet-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:14:54.214357  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/kindnet-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:14:54.295733  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/kindnet-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:14:54.458012  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/kindnet-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:14:54.779819  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/kindnet-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:14:55.421894  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/kindnet-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:14:56.703968  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/kindnet-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:14:59.265719  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/kindnet-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-347996 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: (26.38356279s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.38s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.74s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-347996 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/newest-cni/serial/Stop (11s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-347996 --alsologtostderr -v=3
E1206 10:15:04.387063  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/kindnet-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-347996 --alsologtostderr -v=3: (10.996343086s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.00s)

TestStartStop/group/embed-certs/serial/DeployApp (8.25s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-407677 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3e3b9666-6c02-417d-8f44-951e4fcac410] Pending
helpers_test.go:352: "busybox" [3e3b9666-6c02-417d-8f44-951e4fcac410] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3e3b9666-6c02-417d-8f44-951e4fcac410] Running
E1206 10:15:15.224192  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/auto-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:15.545981  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/auto-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:16.188277  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/auto-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00421416s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-407677 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.25s)
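Note on the DeployApp flow: the harness creates the pod from testdata/busybox.yaml and polls the integration-test=busybox selector until the pod is Running. A minimal standalone sketch of the same check, assuming the embed-certs-407677 context from the log (kubectl wait stands in for the harness's poll loop; 480s mirrors the 8m0s budget above):

  kubectl --context embed-certs-407677 create -f testdata/busybox.yaml
  # Block until the labelled pod reports Ready, mirroring the 8m0s wait above.
  kubectl --context embed-certs-407677 wait pod -l integration-test=busybox -n default --for=condition=Ready --timeout=480s
  # The subtest then confirms exec works by reading the container's open-file limit.
  kubectl --context embed-certs-407677 exec busybox -- /bin/sh -c "ulimit -n"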

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-347996 -n newest-cni-347996
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-347996 -n newest-cni-347996: exit status 7 (86.790874ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-347996 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
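Note on EnableAddonAfterStop: `status` exits 7 for a stopped host but still prints the state, so the harness tolerates the non-zero exit before enabling the addon against the stored profile. A rough shell equivalent built from the exact commands in the log (the `|| echo` guard is illustrative only, to keep a script alive across the expected failure):

  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-347996 -n newest-cni-347996 \
    || echo "status exited $? (7 = host stopped, may be ok)"
  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-347996 --images=MetricsScraper=registry.k8s.io/echoserver:1.4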

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (12.34s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-347996 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-347996 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.35.0-beta.0: (12.007727606s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-347996 -n newest-cni-347996
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.34s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-807643 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7593faa0-c7d5-490c-ae6d-b5ac9771fdec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1206 10:15:14.628641  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/kindnet-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:14.900487  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/auto-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:14.906883  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/auto-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:14.918328  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/auto-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:14.939739  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/auto-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:14.981152  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/auto-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:15.062838  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/auto-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [7593faa0-c7d5-490c-ae6d-b5ac9771fdec] Running
E1206 10:15:17.470437  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/auto-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003469709s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-807643 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-hbwjv" [f23da7cf-15de-4120-8731-9fb05cb5aac6] Running
E1206 10:15:20.032560  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/auto-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004435641s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-407677 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-407677 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/embed-certs/serial/Stop (11.05s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-407677 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-407677 --alsologtostderr -v=3: (11.052169935s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.05s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-807643 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-807643 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-807643 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-807643 --alsologtostderr -v=3: (11.110126941s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.11s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-hbwjv" [f23da7cf-15de-4120-8731-9fb05cb5aac6] Running
E1206 10:15:25.154740  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/auto-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003650952s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-083111 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-347996 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.42s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-347996 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-347996 -n newest-cni-347996
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-347996 -n newest-cni-347996: exit status 2 (337.865382ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-347996 -n newest-cni-347996
E1206 10:15:26.805172  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/calico-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:26.811562  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/calico-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:26.822962  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/calico-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:26.844408  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/calico-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:26.885836  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/calico-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:26.967267  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/calico-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-347996 -n newest-cni-347996: exit status 2 (314.313386ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-347996 --alsologtostderr -v=1
E1206 10:15:27.128561  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/calico-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:27.451236  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/calico-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-347996 -n newest-cni-347996
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-347996 -n newest-cni-347996
E1206 10:15:28.093271  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/calico-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:28.143680  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/functional-326239/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.42s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-083111 image list --format=json
E1206 10:15:29.375190  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/calico-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.48s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-083111 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-083111 -n no-preload-083111
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-083111 -n no-preload-083111: exit status 2 (321.505087ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-083111 -n no-preload-083111
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-083111 -n no-preload-083111: exit status 2 (321.917354ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-083111 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-083111 -n no-preload-083111
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-083111 -n no-preload-083111
E1206 10:15:31.936700  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/calico-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.48s)
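Note on the Pause subtests: each follows the same round trip seen above: pause, confirm the apiserver prints Paused and the kubelet prints Stopped (both via `status` calls whose exit status 2 the harness accepts), then unpause and re-check. A condensed sketch using the no-preload-083111 profile from this run (`|| true` only swallows the expected non-zero exits):

  out/minikube-linux-amd64 pause -p no-preload-083111 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-083111 || true   # prints "Paused", exit 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-083111 || true     # prints "Stopped", exit 2
  out/minikube-linux-amd64 unpause -p no-preload-083111 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-083111           # back to a zero exit once running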

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-407677 -n embed-certs-407677
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-407677 -n embed-certs-407677: exit status 7 (87.343702ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-407677 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/embed-certs/serial/SecondStart (46.95s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-407677 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-407677 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2: (46.619673785s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-407677 -n embed-certs-407677
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (46.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-807643 -n default-k8s-diff-port-807643
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-807643 -n default-k8s-diff-port-807643: exit status 7 (83.419935ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-807643 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1206 10:15:35.396713  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/auto-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.66s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-807643 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2
E1206 10:15:37.058073  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/calico-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:47.300225  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/calico-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:15:55.878410  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/auto-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:16:07.782276  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/calico-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:16:10.877148  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/custom-flannel-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:16:10.883599  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/custom-flannel-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:16:10.895034  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/custom-flannel-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:16:10.916478  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/custom-flannel-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:16:10.957930  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/custom-flannel-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:16:11.039439  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/custom-flannel-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:16:11.201183  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/custom-flannel-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:16:11.522893  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/custom-flannel-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:16:12.165040  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/custom-flannel-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:16:13.447310  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/custom-flannel-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:16:16.009116  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/custom-flannel-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:16:16.071821  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/kindnet-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-807643 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.2: (45.325590816s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-807643 -n default-k8s-diff-port-807643
E1206 10:16:21.131244  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/custom-flannel-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.66s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-np5lp" [072c22d3-91be-4792-8a2b-8e515909ae5a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004441499s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d8xdm" [9c436d9a-67ac-4673-90d8-2e9c6cdd6052] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002901539s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-np5lp" [072c22d3-91be-4792-8a2b-8e515909ae5a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00379384s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-407677 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d8xdm" [9c436d9a-67ac-4673-90d8-2e9c6cdd6052] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002947624s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-807643 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-407677 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.54s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-407677 --alsologtostderr -v=1
E1206 10:16:31.373356  558759 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-555179/.minikube/profiles/custom-flannel-741421/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-407677 -n embed-certs-407677
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-407677 -n embed-certs-407677: exit status 2 (321.263483ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-407677 -n embed-certs-407677
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-407677 -n embed-certs-407677: exit status 2 (315.89931ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-407677 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-407677 -n embed-certs-407677
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-407677 -n embed-certs-407677
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.54s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-807643 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.64s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-807643 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-807643 -n default-k8s-diff-port-807643
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-807643 -n default-k8s-diff-port-807643: exit status 2 (340.458709ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-807643 -n default-k8s-diff-port-807643
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-807643 -n default-k8s-diff-port-807643: exit status 2 (422.114736ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-807643 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-807643 -n default-k8s-diff-port-807643
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-807643 -n default-k8s-diff-port-807643
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.64s)

Test skip (29/434)

Order  Skipped test  Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
117 TestFunctional/parallel/PodmanEnv 0
160 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
161 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
162 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
263 TestGvisorAddon 0
292 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
293 TestISOImage 0
357 TestChangeNoneUser 0
360 TestScheduledStopWindows 0
379 TestNetworkPlugins/group/cilium 9.33
387 TestStartStop/group/disable-driver-mounts 0.2

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (9.33s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-741421 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-741421

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-741421

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-741421

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-741421

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-741421

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-741421

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-741421

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-741421

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-741421

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-741421

>>> host: /etc/nsswitch.conf:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: /etc/hosts:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: /etc/resolv.conf:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-741421

>>> host: crictl pods:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: crictl containers:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> k8s: describe netcat deployment:
error: context "cilium-741421" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-741421" does not exist

>>> k8s: netcat logs:
error: context "cilium-741421" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-741421" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-741421" does not exist

>>> k8s: coredns logs:
error: context "cilium-741421" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-741421" does not exist

>>> k8s: api server logs:
error: context "cilium-741421" does not exist

>>> host: /etc/cni:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: ip a s:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: ip r s:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: iptables-save:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: iptables table nat:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-741421

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-741421

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-741421" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-741421" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-741421

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-741421

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-741421" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-741421" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-741421" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-741421" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-741421" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: kubelet daemon config:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> k8s: kubelet logs:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-741421

>>> host: docker daemon status:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: docker daemon config:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: docker system info:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: cri-docker daemon status:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: cri-docker daemon config:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: cri-dockerd version:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: containerd daemon status:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: containerd daemon config:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: containerd config dump:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: crio daemon status:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: crio daemon config:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: /etc/crio:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

>>> host: crio config:
* Profile "cilium-741421" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741421"

----------------------- debugLogs end: cilium-741421 [took: 9.13623418s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-741421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-741421
--- SKIP: TestNetworkPlugins/group/cilium (9.33s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-804444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-804444
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
