Test Report: Docker_Linux 21767

792b73f7e6a323c75f1a3ad863987d7e01fd8059:2025-10-25:42055

Failed tests (7/345)

Order  Failed test                                                 Duration (s)
   37  TestAddons/parallel/Ingress                                       491.39
   90  TestFunctional/parallel/DashboardCmd                              302.06
   99  TestFunctional/parallel/PersistentVolumeClaim                     368.81
  103  TestFunctional/parallel/MySQL                                     602.53
  119  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup        240.70
  150  TestFunctional/parallel/TunnelCmd/serial/AccessDirect             106.87
  256  TestScheduledStopUnix                                              27.47
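
The failures are detailed individually below. As a hypothetical local re-run of a single failure, Go's standard -run selector against a minikube source checkout would look roughly like this (package path and timeout are assumptions; the harness also expects a prebuilt out/minikube-linux-amd64 binary and may need driver arguments):

    # Re-run only the Ingress subtest; quotes keep the subtest slashes intact.
    go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 60m -v
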
TestAddons/parallel/Ingress (491.39s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-456159 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-456159 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-456159 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [214873df-6ea5-49a2-84da-134b3e4e1ab7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-456159 -n addons-456159
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-10-25 09:27:27.07060045 +0000 UTC m=+708.630419761
addons_test.go:252: (dbg) Run:  kubectl --context addons-456159 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-456159 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-456159/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:19:26 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.36
IPs:
IP:  10.244.0.36
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lcgwn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lcgwn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  8m1s                    default-scheduler  Successfully assigned default/nginx to addons-456159
Warning  Failed     6m30s (x2 over 8m)      kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    5m (x5 over 8m)         kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     5m (x5 over 8m)         kubelet            Error: ErrImagePull
Warning  Failed     5m (x3 over 7m49s)      kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    2m58s (x21 over 7m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     2m58s (x21 over 7m59s)  kubelet            Error: ImagePullBackOff
addons_test.go:252: (dbg) Run:  kubectl --context addons-456159 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-456159 logs nginx -n default: exit status 1 (71.1786ms)

** stderr **
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image
** /stderr **
addons_test.go:252: kubectl --context addons-456159 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
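
The root cause is visible in the events above: every pull of docker.io/nginx:alpine hit Docker Hub's anonymous rate limit (toomanyrequests), so the pod never became Ready and the 8m0s wait expired. A minimal sketch of the same readiness check and one common mitigation, run outside the harness (context and label taken from this run; docker login assumes an account with a higher pull quota):

    # Mirrors the harness's wait; times out while the pod sits in ImagePullBackOff.
    kubectl --context addons-456159 wait --for=condition=Ready pod -l run=nginx -n default --timeout=8m0s
    # Authenticated Docker Hub pulls get a higher rate limit than anonymous ones.
    docker login
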
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-456159
helpers_test.go:243: (dbg) docker inspect addons-456159:

-- stdout --
	[
	    {
	        "Id": "12dc5e0f7cfb44089ee82608d36d4fa826cdb0dd50ba42df521e33ae2926b0df",
	        "Created": "2025-10-25T09:16:12.907324409Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 505310,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:16:12.940135587Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/12dc5e0f7cfb44089ee82608d36d4fa826cdb0dd50ba42df521e33ae2926b0df/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/12dc5e0f7cfb44089ee82608d36d4fa826cdb0dd50ba42df521e33ae2926b0df/hostname",
	        "HostsPath": "/var/lib/docker/containers/12dc5e0f7cfb44089ee82608d36d4fa826cdb0dd50ba42df521e33ae2926b0df/hosts",
	        "LogPath": "/var/lib/docker/containers/12dc5e0f7cfb44089ee82608d36d4fa826cdb0dd50ba42df521e33ae2926b0df/12dc5e0f7cfb44089ee82608d36d4fa826cdb0dd50ba42df521e33ae2926b0df-json.log",
	        "Name": "/addons-456159",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-456159:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-456159",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "12dc5e0f7cfb44089ee82608d36d4fa826cdb0dd50ba42df521e33ae2926b0df",
	                "LowerDir": "/var/lib/docker/overlay2/1f4d1a6fa648c2b9af394007faa868fa437abfe6c11e9544e16781d1308b7d79-init/diff:/var/lib/docker/overlay2/1190de5deda7780238bce4a73ddfc02156e176e9e10c91e09b0cabf2c2920025/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f4d1a6fa648c2b9af394007faa868fa437abfe6c11e9544e16781d1308b7d79/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f4d1a6fa648c2b9af394007faa868fa437abfe6c11e9544e16781d1308b7d79/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f4d1a6fa648c2b9af394007faa868fa437abfe6c11e9544e16781d1308b7d79/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-456159",
	                "Source": "/var/lib/docker/volumes/addons-456159/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-456159",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-456159",
	                "name.minikube.sigs.k8s.io": "addons-456159",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "80da5a0490ff86188d28d40d188e893bc7b9c371bd5d9edfc42c3643489e4c74",
	            "SandboxKey": "/var/run/docker/netns/80da5a0490ff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-456159": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:00:36:9e:3c:4b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a967d243d9d4c8348bbb4de01016769cf3d3e3d6bd7ee2d305388c5ea30c0f7e",
	                    "EndpointID": "983b1813dfd753ad76631da7c5c8927a79d0ff2f6fa3f8b749923c083b0a163e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-456159",
	                        "12dc5e0f7cfb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
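
Single fields can be extracted from that JSON with docker inspect's Go-template --format flag instead of dumping everything; the provisioning log below uses the same technique to resolve the mapped SSH port (container name from this run; the templates are standard docker syntax):

    # Container state, then the host port bound to the guest's SSH port 22.
    docker inspect -f '{{.State.Status}}' addons-456159
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-456159
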
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-456159 -n addons-456159
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 logs -n 25
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                    ARGS                                                                                                                                                                                                                                    │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p download-docker-718888 --alsologtostderr --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-718888 │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ delete  │ -p download-docker-718888                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-docker-718888 │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ start   │ --download-only -p binary-mirror-145493 --alsologtostderr --binary-mirror http://127.0.0.1:36883 --driver=docker  --container-runtime=docker                                                                                                                                                                                                                                                                                                                               │ binary-mirror-145493   │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ delete  │ -p binary-mirror-145493                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ binary-mirror-145493   │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ addons  │ enable dashboard -p addons-456159                                                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ addons  │ disable dashboard -p addons-456159                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ start   │ -p addons-456159 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:18 UTC │
	│ addons  │ addons-456159 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:18 UTC │ 25 Oct 25 09:18 UTC │
	│ addons  │ addons-456159 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:18 UTC │ 25 Oct 25 09:19 UTC │
	│ addons  │ enable headlamp -p addons-456159 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:19 UTC │
	│ addons  │ addons-456159 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:19 UTC │
	│ ssh     │ addons-456159 ssh cat /opt/local-path-provisioner/pvc-b94f5b89-0c64-4b51-b2a3-1c6e15972da1_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                          │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:19 UTC │
	│ addons  │ addons-456159 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                            │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:19 UTC │
	│ addons  │ addons-456159 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:19 UTC │
	│ ip      │ addons-456159 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:19 UTC │
	│ addons  │ addons-456159 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:19 UTC │
	│ addons  │ addons-456159 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:19 UTC │
	│ addons  │ addons-456159 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:19 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-456159                                                                                                                                                                                                                                                                                                                                                                                             │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:19 UTC │
	│ addons  │ addons-456159 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:19 UTC │
	│ addons  │ addons-456159 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:19 UTC │
	│ addons  │ addons-456159 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                          │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:19 UTC │
	│ addons  │ addons-456159 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:19 UTC │
	│ addons  │ addons-456159 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:19 UTC │
	│ addons  │ addons-456159 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-456159          │ jenkins │ v1.37.0 │ 25 Oct 25 09:19 UTC │ 25 Oct 25 09:20 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:15:49
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:15:49.694245  504676 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:15:49.694416  504676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:49.694429  504676 out.go:374] Setting ErrFile to fd 2...
	I1025 09:15:49.694435  504676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:49.694691  504676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
	I1025 09:15:49.695314  504676 out.go:368] Setting JSON to false
	I1025 09:15:49.696313  504676 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3498,"bootTime":1761380252,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:15:49.696410  504676 start.go:141] virtualization: kvm guest
	I1025 09:15:49.698227  504676 out.go:179] * [addons-456159] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:15:49.699481  504676 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:15:49.699500  504676 notify.go:220] Checking for updates...
	I1025 09:15:49.701729  504676 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:15:49.702836  504676 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig
	I1025 09:15:49.704037  504676 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube
	I1025 09:15:49.705918  504676 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:15:49.707324  504676 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:15:49.708911  504676 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:15:49.733124  504676 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:15:49.733235  504676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:49.790368  504676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-25 09:15:49.780698191 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:49.790499  504676 docker.go:318] overlay module found
	I1025 09:15:49.792278  504676 out.go:179] * Using the docker driver based on user configuration
	I1025 09:15:49.793443  504676 start.go:305] selected driver: docker
	I1025 09:15:49.793463  504676 start.go:925] validating driver "docker" against <nil>
	I1025 09:15:49.793482  504676 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:15:49.794358  504676 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:49.849974  504676 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-25 09:15:49.839925234 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:49.850135  504676 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:15:49.850353  504676 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:15:49.852148  504676 out.go:179] * Using Docker driver with root privileges
	I1025 09:15:49.853314  504676 cni.go:84] Creating CNI manager for ""
	I1025 09:15:49.853395  504676 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 09:15:49.853412  504676 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 09:15:49.853507  504676 start.go:349] cluster config:
	{Name:addons-456159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-456159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:49.854877  504676 out.go:179] * Starting "addons-456159" primary control-plane node in "addons-456159" cluster
	I1025 09:15:49.856050  504676 cache.go:123] Beginning downloading kic base image for docker with docker
	I1025 09:15:49.857234  504676 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:15:49.858464  504676 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1025 09:15:49.858516  504676 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-499776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1025 09:15:49.858525  504676 cache.go:58] Caching tarball of preloaded images
	I1025 09:15:49.858626  504676 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:15:49.858680  504676 preload.go:233] Found /home/jenkins/minikube-integration/21767-499776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 09:15:49.858695  504676 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1025 09:15:49.859039  504676 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/config.json ...
	I1025 09:15:49.859068  504676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/config.json: {Name:mkff7838f336e64d94f07d4680e27ed4fda6acf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:49.875552  504676 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:15:49.875721  504676 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 09:15:49.875751  504676 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1025 09:15:49.875756  504676 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1025 09:15:49.875765  504676 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1025 09:15:49.875769  504676 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1025 09:16:02.105540  504676 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1025 09:16:02.105623  504676 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:16:02.105672  504676 start.go:360] acquireMachinesLock for addons-456159: {Name:mk451b7b0b7f63f67dc20f4f4a2ed31e1a5f3d6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:16:02.105798  504676 start.go:364] duration metric: took 88.346µs to acquireMachinesLock for "addons-456159"
	I1025 09:16:02.105826  504676 start.go:93] Provisioning new machine with config: &{Name:addons-456159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-456159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 09:16:02.105910  504676 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:16:02.107500  504676 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 09:16:02.107788  504676 start.go:159] libmachine.API.Create for "addons-456159" (driver="docker")
	I1025 09:16:02.107828  504676 client.go:168] LocalClient.Create starting
	I1025 09:16:02.107967  504676 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca.pem
	I1025 09:16:02.200035  504676 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/cert.pem
	I1025 09:16:02.776268  504676 cli_runner.go:164] Run: docker network inspect addons-456159 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:16:02.794089  504676 cli_runner.go:211] docker network inspect addons-456159 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:16:02.794189  504676 network_create.go:284] running [docker network inspect addons-456159] to gather additional debugging logs...
	I1025 09:16:02.794220  504676 cli_runner.go:164] Run: docker network inspect addons-456159
	W1025 09:16:02.812508  504676 cli_runner.go:211] docker network inspect addons-456159 returned with exit code 1
	I1025 09:16:02.812539  504676 network_create.go:287] error running [docker network inspect addons-456159]: docker network inspect addons-456159: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-456159 not found
	I1025 09:16:02.812554  504676 network_create.go:289] output of [docker network inspect addons-456159]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-456159 not found
	
	** /stderr **
	I1025 09:16:02.812693  504676 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:16:02.830074  504676 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a7b910}
	I1025 09:16:02.830124  504676 network_create.go:124] attempt to create docker network addons-456159 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 09:16:02.830174  504676 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-456159 addons-456159
	I1025 09:16:02.886887  504676 network_create.go:108] docker network addons-456159 192.168.49.0/24 created
	I1025 09:16:02.886926  504676 kic.go:121] calculated static IP "192.168.49.2" for the "addons-456159" container
	I1025 09:16:02.887031  504676 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:16:02.904072  504676 cli_runner.go:164] Run: docker volume create addons-456159 --label name.minikube.sigs.k8s.io=addons-456159 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:16:02.922700  504676 oci.go:103] Successfully created a docker volume addons-456159
	I1025 09:16:02.922793  504676 cli_runner.go:164] Run: docker run --rm --name addons-456159-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-456159 --entrypoint /usr/bin/test -v addons-456159:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:16:08.958899  504676 cli_runner.go:217] Completed: docker run --rm --name addons-456159-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-456159 --entrypoint /usr/bin/test -v addons-456159:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.036026087s)
	I1025 09:16:08.958952  504676 oci.go:107] Successfully prepared a docker volume addons-456159
	I1025 09:16:08.958989  504676 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1025 09:16:08.959019  504676 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:16:08.959122  504676 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-499776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-456159:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:16:12.833190  504676 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-499776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-456159:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.873889762s)
	I1025 09:16:12.833238  504676 kic.go:203] duration metric: took 3.87421537s to extract preloaded images to volume ...
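	The two docker run invocations above exist only to populate the named volume: a throwaway container mounts the preload tarball read-only next to the volume and runs tar inside it. A sketch of the pattern (image, tarball and volume names are hypothetical placeholders):

		package main

		import (
			"fmt"
			"os/exec"
		)

		// extractIntoVolume replays the pattern above: mount a preload tarball
		// read-only, mount the target volume, and let tar inside the container
		// do the extraction.
		func extractIntoVolume(image, tarball, volume string) error {
			out, err := exec.Command("docker", "run", "--rm",
				"--entrypoint", "/usr/bin/tar",
				"-v", tarball+":/preloaded.tar:ro",
				"-v", volume+":/extractDir",
				image,
				"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
			).CombinedOutput()
			if err != nil {
				return fmt.Errorf("extract failed: %v: %s", err, out)
			}
			return nil
		}

		func main() {
			// Placeholder arguments; the real run uses the kicbase image and
			// the preloaded-images tarball shown in the log.
			fmt.Println(extractIntoVolume("some-image", "/tmp/preload.tar.lz4", "myvol"))
		}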
	W1025 09:16:12.833349  504676 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 09:16:12.833384  504676 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 09:16:12.833423  504676 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:16:12.891202  504676 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-456159 --name addons-456159 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-456159 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-456159 --network addons-456159 --ip 192.168.49.2 --volume addons-456159:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:16:13.156548  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Running}}
	I1025 09:16:13.173625  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:13.192709  504676 cli_runner.go:164] Run: docker exec addons-456159 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:16:13.239282  504676 oci.go:144] the created container "addons-456159" has a running status.
	I1025 09:16:13.239315  504676 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa...
	I1025 09:16:13.421379  504676 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
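	The id_rsa/id_rsa.pub pair created here is an ordinary RSA SSH key; the .pub side becomes the authorized_keys line pushed into the container. A sketch of generating such a pair in Go, assuming golang.org/x/crypto/ssh for the authorized_keys encoding:

		package main

		import (
			"crypto/rand"
			"crypto/rsa"
			"crypto/x509"
			"encoding/pem"
			"log"
			"os"

			"golang.org/x/crypto/ssh"
		)

		func main() {
			// Generate the kind of keypair written to .../machines/<name>/id_rsa.
			key, err := rsa.GenerateKey(rand.Reader, 2048)
			if err != nil {
				log.Fatal(err)
			}
			privPEM := pem.EncodeToMemory(&pem.Block{
				Type:  "RSA PRIVATE KEY",
				Bytes: x509.MarshalPKCS1PrivateKey(key),
			})
			pub, err := ssh.NewPublicKey(&key.PublicKey)
			if err != nil {
				log.Fatal(err)
			}
			if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
				log.Fatal(err)
			}
			// MarshalAuthorizedKey yields the "ssh-rsa AAAA..." line format.
			if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
				log.Fatal(err)
			}
		}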
	I1025 09:16:13.449797  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:13.477691  504676 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:16:13.477726  504676 kic_runner.go:114] Args: [docker exec --privileged addons-456159 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 09:16:13.528823  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:13.549759  504676 machine.go:93] provisionDockerMachine start ...
	I1025 09:16:13.549850  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:13.570761  504676 main.go:141] libmachine: Using SSH client type: native
	I1025 09:16:13.571004  504676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1025 09:16:13.571016  504676 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:16:13.715751  504676 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-456159
	
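	The native SSH client logged above (127.0.0.1, forwarded port 33163) can be approximated with golang.org/x/crypto/ssh; a sketch that runs the same first command, hostname (host-key checking is skipped here as a test-only shortcut):

		package main

		import (
			"fmt"
			"log"
			"os"

			"golang.org/x/crypto/ssh"
		)

		func main() {
			keyBytes, err := os.ReadFile("id_rsa")
			if err != nil {
				log.Fatal(err)
			}
			signer, err := ssh.ParsePrivateKey(keyBytes)
			if err != nil {
				log.Fatal(err)
			}
			cfg := &ssh.ClientConfig{
				User:            "docker",
				Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
				HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
			}
			client, err := ssh.Dial("tcp", "127.0.0.1:33163", cfg)
			if err != nil {
				log.Fatal(err)
			}
			defer client.Close()
			sess, err := client.NewSession()
			if err != nil {
				log.Fatal(err)
			}
			defer sess.Close()
			out, err := sess.CombinedOutput("hostname")
			fmt.Printf("%s err=%v\n", out, err)
		}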
	I1025 09:16:13.715784  504676 ubuntu.go:182] provisioning hostname "addons-456159"
	I1025 09:16:13.715841  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:13.735804  504676 main.go:141] libmachine: Using SSH client type: native
	I1025 09:16:13.736105  504676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1025 09:16:13.736125  504676 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-456159 && echo "addons-456159" | sudo tee /etc/hostname
	I1025 09:16:13.889727  504676 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-456159
	
	I1025 09:16:13.889832  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:13.907496  504676 main.go:141] libmachine: Using SSH client type: native
	I1025 09:16:13.907754  504676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1025 09:16:13.907776  504676 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-456159' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-456159/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-456159' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:16:14.049432  504676 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:16:14.049461  504676 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-499776/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-499776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-499776/.minikube}
	I1025 09:16:14.049481  504676 ubuntu.go:190] setting up certificates
	I1025 09:16:14.049500  504676 provision.go:84] configureAuth start
	I1025 09:16:14.049570  504676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-456159
	I1025 09:16:14.067385  504676 provision.go:143] copyHostCerts
	I1025 09:16:14.067454  504676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-499776/.minikube/ca.pem (1082 bytes)
	I1025 09:16:14.067614  504676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-499776/.minikube/cert.pem (1123 bytes)
	I1025 09:16:14.067697  504676 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-499776/.minikube/key.pem (1679 bytes)
	I1025 09:16:14.067753  504676 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-499776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca-key.pem org=jenkins.addons-456159 san=[127.0.0.1 192.168.49.2 addons-456159 localhost minikube]
	I1025 09:16:14.131143  504676 provision.go:177] copyRemoteCerts
	I1025 09:16:14.131203  504676 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:16:14.131241  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:14.149923  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:14.251319  504676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:16:14.271844  504676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 09:16:14.290279  504676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:16:14.308676  504676 provision.go:87] duration metric: took 259.157751ms to configureAuth
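	configureAuth boils down to minting a CA-signed server certificate whose SANs match the san list logged above (127.0.0.1 192.168.49.2 addons-456159 localhost minikube). A minimal sketch with crypto/x509, assuming freshly generated keys and illustrative validity periods:

		package main

		import (
			"crypto/rand"
			"crypto/rsa"
			"crypto/x509"
			"crypto/x509/pkix"
			"encoding/pem"
			"log"
			"math/big"
			"net"
			"os"
			"time"
		)

		func main() {
			// A self-signed CA, then a server cert signed by it.
			caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
			caTmpl := &x509.Certificate{
				SerialNumber:          big.NewInt(1),
				Subject:               pkix.Name{CommonName: "minikubeCA"},
				NotBefore:             time.Now(),
				NotAfter:              time.Now().AddDate(10, 0, 0),
				IsCA:                  true,
				KeyUsage:              x509.KeyUsageCertSign,
				BasicConstraintsValid: true,
			}
			caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
			if err != nil {
				log.Fatal(err)
			}
			caCert, _ := x509.ParseCertificate(caDER)

			srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
			srvTmpl := &x509.Certificate{
				SerialNumber: big.NewInt(2),
				Subject:      pkix.Name{Organization: []string{"jenkins.addons-456159"}},
				NotBefore:    time.Now(),
				NotAfter:     time.Now().AddDate(3, 0, 0),
				KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
				ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
				// SANs copied from the san=[...] list in the log.
				DNSNames:    []string{"addons-456159", "localhost", "minikube"},
				IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			}
			srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
			if err != nil {
				log.Fatal(err)
			}
			pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
		}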
	I1025 09:16:14.308714  504676 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:16:14.308909  504676 config.go:182] Loaded profile config "addons-456159": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:16:14.308965  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:14.328194  504676 main.go:141] libmachine: Using SSH client type: native
	I1025 09:16:14.328420  504676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1025 09:16:14.328434  504676 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 09:16:14.472753  504676 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 09:16:14.472774  504676 ubuntu.go:71] root file system type: overlay
	I1025 09:16:14.472903  504676 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 09:16:14.472960  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:14.491272  504676 main.go:141] libmachine: Using SSH client type: native
	I1025 09:16:14.491485  504676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1025 09:16:14.491558  504676 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 09:16:14.647600  504676 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 09:16:14.647698  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:14.665722  504676 main.go:141] libmachine: Using SSH client type: native
	I1025 09:16:14.665949  504676 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I1025 09:16:14.665967  504676 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 09:16:15.875037  504676 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-10-08 12:15:50.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-25 09:16:14.644643080 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1025 09:16:15.875085  504676 machine.go:96] duration metric: took 2.325297045s to provisionDockerMachine
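	The one-liner at 09:16:14.665 is an idempotent unit update: write docker.service.new, diff it against the live unit, and only on a difference move it into place and bounce the daemon (the diff output above shows exactly what changed). A Go sketch of the same pattern, with paths and service name taken from the log:

		package main

		import (
			"bytes"
			"log"
			"os"
			"os/exec"
		)

		// installUnit mimics the diff-or-replace one-liner: only move the .new
		// file into place and restart the service when the content changed.
		func installUnit(path string, content []byte) error {
			old, _ := os.ReadFile(path) // a missing file reads as empty
			if bytes.Equal(old, content) {
				return nil // unchanged; skip the restart entirely
			}
			if err := os.WriteFile(path+".new", content, 0644); err != nil {
				return err
			}
			if err := os.Rename(path+".new", path); err != nil {
				return err
			}
			for _, args := range [][]string{
				{"systemctl", "daemon-reload"},
				{"systemctl", "enable", "docker"},
				{"systemctl", "restart", "docker"},
			} {
				if err := exec.Command("sudo", args...).Run(); err != nil {
					return err
				}
			}
			return nil
		}

		func main() {
			// Placeholder content; the real unit body is printed in the log above.
			if err := installUnit("/lib/systemd/system/docker.service", []byte("[Unit]\n")); err != nil {
				log.Fatal(err)
			}
		}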
	I1025 09:16:15.875104  504676 client.go:171] duration metric: took 13.76726395s to LocalClient.Create
	I1025 09:16:15.875133  504676 start.go:167] duration metric: took 13.767344088s to libmachine.API.Create "addons-456159"
	I1025 09:16:15.875148  504676 start.go:293] postStartSetup for "addons-456159" (driver="docker")
	I1025 09:16:15.875165  504676 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:16:15.875240  504676 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:16:15.875293  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:15.892986  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:15.997766  504676 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:16:16.001593  504676 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:16:16.001642  504676 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:16:16.001656  504676 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-499776/.minikube/addons for local assets ...
	I1025 09:16:16.001715  504676 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-499776/.minikube/files for local assets ...
	I1025 09:16:16.001741  504676 start.go:296] duration metric: took 126.58324ms for postStartSetup
	I1025 09:16:16.002032  504676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-456159
	I1025 09:16:16.020181  504676 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/config.json ...
	I1025 09:16:16.020485  504676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:16:16.020531  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:16.038077  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:16.137336  504676 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:16:16.142385  504676 start.go:128] duration metric: took 14.036452874s to createHost
	I1025 09:16:16.142458  504676 start.go:83] releasing machines lock for "addons-456159", held for 14.036639094s
	I1025 09:16:16.142538  504676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-456159
	I1025 09:16:16.160287  504676 ssh_runner.go:195] Run: cat /version.json
	I1025 09:16:16.160341  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:16.160393  504676 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:16:16.160484  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:16.178372  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:16.179672  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:16.276327  504676 ssh_runner.go:195] Run: systemctl --version
	I1025 09:16:16.330667  504676 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:16:16.335956  504676 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:16:16.336031  504676 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:16:16.364058  504676 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
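	Disabling the conflicting bridge CNI configs is just a rename with a .mk_disabled suffix, as the find/mv command above shows; an equivalent sketch using filepath.Glob:

		package main

		import (
			"fmt"
			"os"
			"path/filepath"
		)

		func main() {
			// Park bridge/podman CNI configs so only minikube's chosen CNI is active.
			for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
				matches, _ := filepath.Glob(pat)
				for _, m := range matches {
					if filepath.Ext(m) == ".mk_disabled" {
						continue // already parked
					}
					if err := os.Rename(m, m+".mk_disabled"); err == nil {
						fmt.Println("disabled", m)
					}
				}
			}
		}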
	I1025 09:16:16.364099  504676 start.go:495] detecting cgroup driver to use...
	I1025 09:16:16.364133  504676 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:16:16.364271  504676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:16:16.380116  504676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1025 09:16:16.391865  504676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 09:16:16.402343  504676 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1025 09:16:16.402416  504676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1025 09:16:16.412553  504676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 09:16:16.422261  504676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 09:16:16.431711  504676 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 09:16:16.441179  504676 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:16:16.450068  504676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 09:16:16.459759  504676 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1025 09:16:16.469027  504676 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1025 09:16:16.478930  504676 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:16:16.487301  504676 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:16:16.495198  504676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:16:16.578658  504676 ssh_runner.go:195] Run: sudo systemctl restart containerd
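	The sed edits above rewrite /etc/containerd/config.toml in place; the key one forces SystemdCgroup = true so containerd matches the systemd cgroup driver detected on the host. An equivalent of that single edit in Go:

		package main

		import (
			"log"
			"os"
			"regexp"
		)

		func main() {
			const path = "/etc/containerd/config.toml"
			data, err := os.ReadFile(path)
			if err != nil {
				log.Fatal(err)
			}
			// (?m) makes ^...$ match per line, like sed's default behavior.
			re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
			data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
			if err := os.WriteFile(path, data, 0644); err != nil {
				log.Fatal(err)
			}
		}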
	I1025 09:16:16.653571  504676 start.go:495] detecting cgroup driver to use...
	I1025 09:16:16.653665  504676 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 09:16:16.653728  504676 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 09:16:16.668776  504676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:16:16.682930  504676 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:16:16.700699  504676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:16:16.713760  504676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 09:16:16.727516  504676 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:16:16.742719  504676 ssh_runner.go:195] Run: which cri-dockerd
	I1025 09:16:16.746777  504676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 09:16:16.756538  504676 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1025 09:16:16.770289  504676 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 09:16:16.854642  504676 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 09:16:16.936407  504676 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I1025 09:16:16.936556  504676 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1025 09:16:16.950203  504676 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1025 09:16:16.963521  504676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:16:17.040773  504676 ssh_runner.go:195] Run: sudo systemctl restart docker
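	The log records only the size of the daemon.json pushed here (129 bytes), not its content. A plausible shape, assuming it merely pins the systemd cgroup driver (this is a guess, not the file minikube actually wrote), would be produced like so:

		package main

		import (
			"encoding/json"
			"log"
			"os"
		)

		func main() {
			// "exec-opts" with native.cgroupdriver is the standard dockerd knob
			// for selecting the cgroup driver; the real file may set more keys.
			cfg := map[string]any{
				"exec-opts": []string{"native.cgroupdriver=systemd"},
			}
			data, err := json.MarshalIndent(cfg, "", "  ")
			if err != nil {
				log.Fatal(err)
			}
			if err := os.WriteFile("/etc/docker/daemon.json", data, 0644); err != nil {
				log.Fatal(err)
			}
		}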
	I1025 09:16:17.829451  504676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:16:17.842377  504676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1025 09:16:17.855825  504676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 09:16:17.869564  504676 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 09:16:17.955215  504676 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 09:16:18.040314  504676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:16:18.120842  504676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 09:16:18.144357  504676 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1025 09:16:18.158281  504676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:16:18.239410  504676 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1025 09:16:18.313846  504676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 09:16:18.327323  504676 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 09:16:18.327403  504676 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 09:16:18.331440  504676 start.go:563] Will wait 60s for crictl version
	I1025 09:16:18.331498  504676 ssh_runner.go:195] Run: which crictl
	I1025 09:16:18.335332  504676 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:16:18.362116  504676 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.1
	RuntimeApiVersion:  v1
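	The two "Will wait 60s" lines above are readiness polls: the log stats /var/run/cri-dockerd.sock, and dialing the socket is a slightly stricter variant of the same check. A sketch of that polling loop:

		package main

		import (
			"fmt"
			"net"
			"time"
		)

		// waitForSocket polls a unix socket until it accepts connections or the
		// deadline passes.
		func waitForSocket(path string, timeout time.Duration) error {
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				conn, err := net.DialTimeout("unix", path, time.Second)
				if err == nil {
					conn.Close()
					return nil
				}
				time.Sleep(500 * time.Millisecond)
			}
			return fmt.Errorf("socket %s not ready after %s", path, timeout)
		}

		func main() {
			fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
		}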
	I1025 09:16:18.362197  504676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 09:16:18.388812  504676 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 09:16:18.418145  504676 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.5.1 ...
	I1025 09:16:18.418267  504676 cli_runner.go:164] Run: docker network inspect addons-456159 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:16:18.435722  504676 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 09:16:18.439939  504676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:16:18.450676  504676 kubeadm.go:883] updating cluster {Name:addons-456159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-456159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:16:18.450815  504676 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1025 09:16:18.450862  504676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 09:16:18.472297  504676 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 09:16:18.472325  504676 docker.go:621] Images already preloaded, skipping extraction
	I1025 09:16:18.472391  504676 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 09:16:18.493120  504676 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 09:16:18.493142  504676 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:16:18.493152  504676 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 docker true true} ...
	I1025 09:16:18.493260  504676 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-456159 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-456159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:16:18.493319  504676 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 09:16:18.545526  504676 cni.go:84] Creating CNI manager for ""
	I1025 09:16:18.545573  504676 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 09:16:18.545638  504676 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:16:18.545673  504676 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-456159 NodeName:addons-456159 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:16:18.545834  504676 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-456159"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
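	The KubeletConfiguration section of the generated kubeadm config maps directly onto typed fields; a sketch parsing an interesting subset with gopkg.in/yaml.v3 (the struct is a hypothetical subset for illustration, not minikube's or kubelet's actual type):

		package main

		import (
			"fmt"
			"log"

			"gopkg.in/yaml.v3"
		)

		type kubeletConfig struct {
			Kind                     string `yaml:"kind"`
			CgroupDriver             string `yaml:"cgroupDriver"`
			ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
			FailSwapOn               bool   `yaml:"failSwapOn"`
		}

		func main() {
			// Values copied from the generated config above.
			doc := `
		kind: KubeletConfiguration
		cgroupDriver: systemd
		containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
		failSwapOn: false
		`
			var c kubeletConfig
			if err := yaml.Unmarshal([]byte(doc), &c); err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%+v\n", c)
		}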
	I1025 09:16:18.545910  504676 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:16:18.554439  504676 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:16:18.554531  504676 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:16:18.562638  504676 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1025 09:16:18.576145  504676 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:16:18.589475  504676 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1025 09:16:18.602932  504676 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:16:18.606920  504676 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:16:18.617656  504676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:16:18.699668  504676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:16:18.726368  504676 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159 for IP: 192.168.49.2
	I1025 09:16:18.726400  504676 certs.go:195] generating shared ca certs ...
	I1025 09:16:18.726427  504676 certs.go:227] acquiring lock for ca certs: {Name:mk591f43cf4589df71f5cb0e6167ddf369a67a39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:16:18.726567  504676 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-499776/.minikube/ca.key
	I1025 09:16:19.443850  504676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-499776/.minikube/ca.crt ...
	I1025 09:16:19.443883  504676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/ca.crt: {Name:mk56167389a99b2fe4451e1f5d55b9146625e8f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:16:19.444073  504676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-499776/.minikube/ca.key ...
	I1025 09:16:19.444085  504676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/ca.key: {Name:mk844fbb6e1f3bcdf8cbb88fc47fc5e20d54a432 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:16:19.444159  504676 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-499776/.minikube/proxy-client-ca.key
	I1025 09:16:19.695444  504676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-499776/.minikube/proxy-client-ca.crt ...
	I1025 09:16:19.695472  504676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/proxy-client-ca.crt: {Name:mkdf4eeb0c4260135ef1771335b7229afb1bc499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:16:19.696524  504676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-499776/.minikube/proxy-client-ca.key ...
	I1025 09:16:19.696551  504676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/proxy-client-ca.key: {Name:mka1d78173cc51f827b5b7434cc3bea43191134e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:16:19.696666  504676 certs.go:257] generating profile certs ...
	I1025 09:16:19.696733  504676 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.key
	I1025 09:16:19.696748  504676 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt with IP's: []
	I1025 09:16:19.962753  504676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt ...
	I1025 09:16:19.962788  504676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: {Name:mk08e80d0ad006d5d8971b89ebad3fc0b4163cc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:16:19.962964  504676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.key ...
	I1025 09:16:19.962976  504676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.key: {Name:mk833ea29b88ee275c8c5cd305318b9f6b82d224 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:16:19.963054  504676 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/apiserver.key.69782f69
	I1025 09:16:19.963070  504676 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/apiserver.crt.69782f69 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1025 09:16:20.027254  504676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/apiserver.crt.69782f69 ...
	I1025 09:16:20.027284  504676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/apiserver.crt.69782f69: {Name:mkf839594ad073322f5e59bffb7144a16fc2cf23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:16:20.027447  504676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/apiserver.key.69782f69 ...
	I1025 09:16:20.027460  504676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/apiserver.key.69782f69: {Name:mkef56b41c65343b464df7ae3f5125fae77e0e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:16:20.027526  504676 certs.go:382] copying /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/apiserver.crt.69782f69 -> /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/apiserver.crt
	I1025 09:16:20.027616  504676 certs.go:386] copying /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/apiserver.key.69782f69 -> /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/apiserver.key
	I1025 09:16:20.027661  504676 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/proxy-client.key
	I1025 09:16:20.027681  504676 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/proxy-client.crt with IP's: []
	I1025 09:16:20.526554  504676 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/proxy-client.crt ...
	I1025 09:16:20.526601  504676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/proxy-client.crt: {Name:mk8c3e7bd0081723eb86214585b091fb96af5488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:16:20.526784  504676 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/proxy-client.key ...
	I1025 09:16:20.526799  504676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/proxy-client.key: {Name:mkce624c7bc094365c9b971a9e2d9764f20cdc4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:16:20.526993  504676 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:16:20.527028  504676 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:16:20.527048  504676 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:16:20.527070  504676 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/key.pem (1679 bytes)
	I1025 09:16:20.527771  504676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:16:20.546745  504676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 09:16:20.565403  504676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:16:20.584850  504676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:16:20.603360  504676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:16:20.621484  504676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:16:20.639058  504676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:16:20.656444  504676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:16:20.674533  504676 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:16:20.696162  504676 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:16:20.709063  504676 ssh_runner.go:195] Run: openssl version
	I1025 09:16:20.715283  504676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:16:20.726761  504676 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:16:20.730807  504676 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:16:20.730892  504676 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:16:20.765695  504676 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
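	The b5213941.0 link created above follows OpenSSL's hashed-directory convention: the filename is the subject hash of minikubeCA.pem, which is how TLS clients look the CA up in /etc/ssl/certs. The same two steps in Go, shelling out to openssl for the hash:

		package main

		import (
			"log"
			"os"
			"os/exec"
			"strings"
		)

		func main() {
			// "openssl x509 -hash -noout" prints the subject-name hash, e.g. b5213941.
			out, err := exec.Command("openssl", "x509", "-hash", "-noout",
				"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
			if err != nil {
				log.Fatal(err)
			}
			hash := strings.TrimSpace(string(out))
			link := "/etc/ssl/certs/" + hash + ".0"
			if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil && !os.IsExist(err) {
				log.Fatal(err)
			}
		}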
	I1025 09:16:20.775217  504676 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:16:20.779327  504676 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:16:20.779379  504676 kubeadm.go:400] StartCluster: {Name:addons-456159 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-456159 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:16:20.779492  504676 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 09:16:20.799218  504676 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:16:20.807970  504676 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:16:20.816565  504676 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:16:20.816636  504676 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:16:20.824703  504676 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:16:20.824721  504676 kubeadm.go:157] found existing configuration files:
	
	I1025 09:16:20.824761  504676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:16:20.832774  504676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:16:20.832826  504676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:16:20.840418  504676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:16:20.848233  504676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:16:20.848322  504676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:16:20.856399  504676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:16:20.864113  504676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:16:20.864173  504676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:16:20.871867  504676 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:16:20.879937  504676 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:16:20.879997  504676 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:16:20.887721  504676 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:16:20.926270  504676 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:16:20.926330  504676 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:16:20.961091  504676 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:16:20.961188  504676 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 09:16:20.961241  504676 kubeadm.go:318] OS: Linux
	I1025 09:16:20.961328  504676 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:16:20.961399  504676 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:16:20.961469  504676 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:16:20.961530  504676 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:16:20.961630  504676 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:16:20.961721  504676 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:16:20.961791  504676 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:16:20.961848  504676 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 09:16:21.021881  504676 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:16:21.022022  504676 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:16:21.022176  504676 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:16:21.034225  504676 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:16:21.037277  504676 out.go:252]   - Generating certificates and keys ...
	I1025 09:16:21.037380  504676 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:16:21.037492  504676 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:16:21.262743  504676 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:16:21.611882  504676 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:16:21.720280  504676 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:16:21.815346  504676 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:16:22.072712  504676 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:16:22.072880  504676 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-456159 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 09:16:22.515093  504676 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:16:22.515262  504676 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-456159 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 09:16:22.805366  504676 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:16:22.999978  504676 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:16:23.154183  504676 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:16:23.154289  504676 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:16:23.893458  504676 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:16:24.050011  504676 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:16:24.341113  504676 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:16:24.829301  504676 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:16:24.992838  504676 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:16:24.993217  504676 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:16:24.997048  504676 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:16:24.998573  504676 out.go:252]   - Booting up control plane ...
	I1025 09:16:24.998725  504676 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:16:24.998852  504676 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:16:25.000562  504676 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:16:25.015726  504676 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:16:25.015830  504676 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:16:25.022696  504676 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:16:25.022850  504676 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:16:25.022941  504676 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:16:25.127875  504676 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:16:25.128028  504676 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:16:25.628563  504676 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.875864ms
	I1025 09:16:25.632607  504676 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:16:25.632751  504676 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1025 09:16:25.632864  504676 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:16:25.632983  504676 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:16:27.991472  504676 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.358759523s
	I1025 09:16:28.352196  504676 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.719656897s
	I1025 09:16:30.134154  504676 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501548739s
	I1025 09:16:30.147076  504676 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:16:30.157946  504676 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:16:30.168962  504676 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:16:30.169286  504676 kubeadm.go:318] [mark-control-plane] Marking the node addons-456159 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:16:30.177523  504676 kubeadm.go:318] [bootstrap-token] Using token: e4e032.cgkaoku9fbqupgym
	I1025 09:16:30.178815  504676 out.go:252]   - Configuring RBAC rules ...
	I1025 09:16:30.178990  504676 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:16:30.182528  504676 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:16:30.188703  504676 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:16:30.191678  504676 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:16:30.194508  504676 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:16:30.197153  504676 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:16:30.541243  504676 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:16:30.958115  504676 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:16:31.541595  504676 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:16:31.542475  504676 kubeadm.go:318] 
	I1025 09:16:31.542595  504676 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:16:31.542608  504676 kubeadm.go:318] 
	I1025 09:16:31.542750  504676 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:16:31.542770  504676 kubeadm.go:318] 
	I1025 09:16:31.542807  504676 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:16:31.542867  504676 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:16:31.542944  504676 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:16:31.542953  504676 kubeadm.go:318] 
	I1025 09:16:31.543032  504676 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:16:31.543047  504676 kubeadm.go:318] 
	I1025 09:16:31.543115  504676 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:16:31.543126  504676 kubeadm.go:318] 
	I1025 09:16:31.543208  504676 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:16:31.543329  504676 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:16:31.543400  504676 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:16:31.543406  504676 kubeadm.go:318] 
	I1025 09:16:31.543476  504676 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:16:31.543556  504676 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:16:31.543565  504676 kubeadm.go:318] 
	I1025 09:16:31.543699  504676 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token e4e032.cgkaoku9fbqupgym \
	I1025 09:16:31.543840  504676 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c90a3b482422c1132c705eb6f8dc3664d0c29dd0e4f154a7770e9ff4c357ad9d \
	I1025 09:16:31.543900  504676 kubeadm.go:318] 	--control-plane 
	I1025 09:16:31.543910  504676 kubeadm.go:318] 
	I1025 09:16:31.544036  504676 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:16:31.544048  504676 kubeadm.go:318] 
	I1025 09:16:31.544153  504676 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token e4e032.cgkaoku9fbqupgym \
	I1025 09:16:31.544247  504676 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c90a3b482422c1132c705eb6f8dc3664d0c29dd0e4f154a7770e9ff4c357ad9d 
	I1025 09:16:31.546415  504676 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 09:16:31.546572  504676 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
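
The `--discovery-token-ca-cert-hash` value printed in the join commands above is, per standard kubeadm behavior, a SHA-256 digest of the cluster CA certificate's Subject Public Key Info. A sketch that recomputes it from the conventional CA location (the path is an assumption; adjust for your PKI directory):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded Subject Public Key Info of the CA.
        fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
    }
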
	I1025 09:16:31.546634  504676 cni.go:84] Creating CNI manager for ""
	I1025 09:16:31.546657  504676 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 09:16:31.549015  504676 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 09:16:31.550287  504676 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 09:16:31.558977  504676 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
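
The log copies a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist but does not show its contents. The snippet below is illustrative only: the general shape of a bridge CNI conflist, with plugin names and the 10.244.0.0/16 pod subnet assumed (the subnet matches the pod IPs seen elsewhere in this report):

    package main

    import "fmt"

    func main() {
        // Illustrative bridge CNI config; not the exact file minikube copies.
        fmt.Println(`{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }`)
    }
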
	I1025 09:16:31.572478  504676 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:16:31.572535  504676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:16:31.572596  504676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-456159 minikube.k8s.io/updated_at=2025_10_25T09_16_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=addons-456159 minikube.k8s.io/primary=true
	I1025 09:16:31.582798  504676 ops.go:34] apiserver oom_adj: -16
	I1025 09:16:31.640343  504676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:16:32.141368  504676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:16:32.640453  504676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:16:33.140576  504676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:16:33.640762  504676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:16:34.141152  504676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:16:34.641245  504676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:16:35.140816  504676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:16:35.641342  504676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:16:36.141364  504676 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:16:36.209982  504676 kubeadm.go:1113] duration metric: took 4.637504295s to wait for elevateKubeSystemPrivileges
	I1025 09:16:36.210023  504676 kubeadm.go:402] duration metric: took 15.430648986s to StartCluster
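
The burst of identical `kubectl get sa default` runs above is a poll at roughly 500ms intervals until the default ServiceAccount exists (4.64s in this run), after which minikube's cluster-admin binding for kube-system can take effect. A minimal sketch of that loop, again with a hypothetical run helper:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForDefaultSA re-runs the probe command until it succeeds or the
    // timeout elapses, mirroring the ~500ms cadence visible in the log.
    func waitForDefaultSA(run func(string) error, timeout time.Duration) error {
        cmd := "kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig"
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := run(cmd); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return errors.New("timed out waiting for default service account")
    }

    func main() {
        _ = waitForDefaultSA(func(c string) error { fmt.Println(c); return nil }, time.Minute)
    }
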
	I1025 09:16:36.210045  504676 settings.go:142] acquiring lock: {Name:mkcd1be1e8e86a0216701a7ffe40647298894af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:16:36.210154  504676 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-499776/kubeconfig
	I1025 09:16:36.211299  504676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/kubeconfig: {Name:mkce2c8734c7bbe9f4385b3c0c646885305b640b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:16:36.211630  504676 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 09:16:36.212001  504676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:16:36.211881  504676 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1025 09:16:36.212178  504676 config.go:182] Loaded profile config "addons-456159": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:16:36.212270  504676 addons.go:69] Setting ingress-dns=true in profile "addons-456159"
	I1025 09:16:36.212314  504676 addons.go:69] Setting metrics-server=true in profile "addons-456159"
	I1025 09:16:36.212329  504676 addons.go:238] Setting addon ingress-dns=true in "addons-456159"
	I1025 09:16:36.212379  504676 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-456159"
	I1025 09:16:36.212337  504676 addons.go:238] Setting addon metrics-server=true in "addons-456159"
	I1025 09:16:36.212399  504676 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-456159"
	I1025 09:16:36.212415  504676 addons.go:69] Setting registry=true in profile "addons-456159"
	I1025 09:16:36.212428  504676 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-456159"
	I1025 09:16:36.212436  504676 addons.go:238] Setting addon registry=true in "addons-456159"
	I1025 09:16:36.212452  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.212452  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.212463  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.212482  504676 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-456159"
	I1025 09:16:36.212491  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.212512  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.212967  504676 addons.go:69] Setting gcp-auth=true in profile "addons-456159"
	I1025 09:16:36.213005  504676 mustload.go:65] Loading cluster: addons-456159
	I1025 09:16:36.213209  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.213224  504676 config.go:182] Loaded profile config "addons-456159": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:16:36.212347  504676 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-456159"
	I1025 09:16:36.212303  504676 addons.go:69] Setting inspektor-gadget=true in profile "addons-456159"
	I1025 09:16:36.213244  504676 addons.go:69] Setting default-storageclass=true in profile "addons-456159"
	I1025 09:16:36.213254  504676 addons.go:238] Setting addon inspektor-gadget=true in "addons-456159"
	I1025 09:16:36.213251  504676 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-456159"
	I1025 09:16:36.212285  504676 addons.go:69] Setting yakd=true in profile "addons-456159"
	I1025 09:16:36.212421  504676 addons.go:69] Setting cloud-spanner=true in profile "addons-456159"
	I1025 09:16:36.213281  504676 addons.go:238] Setting addon cloud-spanner=true in "addons-456159"
	I1025 09:16:36.213285  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.213287  504676 addons.go:238] Setting addon yakd=true in "addons-456159"
	I1025 09:16:36.213303  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.213324  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.213606  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.214448  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.214675  504676 out.go:179] * Verifying Kubernetes components...
	I1025 09:16:36.214742  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.214831  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.212279  504676 addons.go:69] Setting registry-creds=true in profile "addons-456159"
	I1025 09:16:36.215132  504676 addons.go:238] Setting addon registry-creds=true in "addons-456159"
	I1025 09:16:36.215165  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.213209  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.213230  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.213628  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.219088  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.219771  504676 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:16:36.212376  504676 addons.go:69] Setting volumesnapshots=true in profile "addons-456159"
	I1025 09:16:36.219916  504676 addons.go:238] Setting addon volumesnapshots=true in "addons-456159"
	I1025 09:16:36.219965  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.213233  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.220522  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.213232  504676 addons.go:69] Setting ingress=true in profile "addons-456159"
	I1025 09:16:36.221096  504676 addons.go:238] Setting addon ingress=true in "addons-456159"
	I1025 09:16:36.221152  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.221691  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.213255  504676 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-456159"
	I1025 09:16:36.213260  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.212316  504676 addons.go:69] Setting storage-provisioner=true in profile "addons-456159"
	I1025 09:16:36.224095  504676 addons.go:238] Setting addon storage-provisioner=true in "addons-456159"
	I1025 09:16:36.224147  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.212369  504676 addons.go:69] Setting volcano=true in profile "addons-456159"
	I1025 09:16:36.225734  504676 addons.go:238] Setting addon volcano=true in "addons-456159"
	I1025 09:16:36.225772  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.228051  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.228207  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.228990  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.212375  504676 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-456159"
	I1025 09:16:36.237021  504676 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-456159"
	I1025 09:16:36.237112  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.243574  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.276457  504676 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1025 09:16:36.282350  504676 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 09:16:36.282967  504676 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:16:36.282986  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 09:16:36.283049  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.283318  504676 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1025 09:16:36.285214  504676 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 09:16:36.285301  504676 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:16:36.285330  504676 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 09:16:36.285498  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 09:16:36.285548  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.285608  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.296802  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.300958  504676 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-456159"
	I1025 09:16:36.301046  504676 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 09:16:36.301051  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.301558  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.304858  504676 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 09:16:36.304877  504676 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 09:16:36.305073  504676 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 09:16:36.305138  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.308708  504676 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 09:16:36.310891  504676 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 09:16:36.310973  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.313485  504676 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 09:16:36.315630  504676 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 09:16:36.316359  504676 addons.go:238] Setting addon default-storageclass=true in "addons-456159"
	I1025 09:16:36.316407  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:36.318676  504676 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 09:16:36.319173  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:36.322676  504676 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 09:16:36.324160  504676 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 09:16:36.324281  504676 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 09:16:36.325481  504676 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 09:16:36.325570  504676 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 09:16:36.325571  504676 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:16:36.325795  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 09:16:36.326044  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.329626  504676 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 09:16:36.330492  504676 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 09:16:36.330511  504676 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 09:16:36.330628  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.332731  504676 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 09:16:36.332755  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 09:16:36.332786  504676 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1025 09:16:36.332820  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.332839  504676 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 09:16:36.335010  504676 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:16:36.335690  504676 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1025 09:16:36.336925  504676 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:16:36.336944  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 09:16:36.337000  504676 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:16:36.337004  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.337247  504676 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1025 09:16:36.341663  504676 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:16:36.341837  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 09:16:36.341960  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.340706  504676 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 09:16:36.344247  504676 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1025 09:16:36.345682  504676 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 09:16:36.345992  504676 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 09:16:36.350964  504676 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1025 09:16:36.351044  504676 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 09:16:36.351061  504676 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 09:16:36.351127  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.351462  504676 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 09:16:36.351476  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 09:16:36.351546  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.351751  504676 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:16:36.359320  504676 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:16:36.359349  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:16:36.359425  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.363237  504676 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1025 09:16:36.363352  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1025 09:16:36.363524  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.369709  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:36.372529  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:36.376298  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:36.379027  504676 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 09:16:36.380363  504676 out.go:179]   - Using image docker.io/busybox:stable
	I1025 09:16:36.380680  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:36.381457  504676 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:16:36.381477  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 09:16:36.381539  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.408653  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:36.409300  504676 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 09:16:36.411662  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:36.417762  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:36.425129  504676 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:16:36.429759  504676 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:16:36.429846  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:36.430360  504676 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:16:36.427991  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:36.434572  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:36.437500  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:36.440944  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:36.443269  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	W1025 09:16:36.445906  504676 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:16:36.445953  504676 retry.go:31] will retry after 313.040557ms: ssh: handshake failed: EOF
	I1025 09:16:36.449211  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:36.460208  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:36.461186  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:36.483976  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
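
One of the many concurrent SSH dials above fails transiently (handshake EOF at 09:16:36.445906) and is rerun after a randomized delay (313ms here) by minikube's retry helper. A minimal sketch of that retry-with-jittered-delay pattern, with a hypothetical API rather than minikube's actual retry package:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func withRetry(attempts int, base time.Duration, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // Randomizing the delay spreads out reconnect attempts when
            // many clients fail at once.
            d := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        n := 0
        _ = withRetry(3, 200*time.Millisecond, func() error {
            if n++; n < 3 {
                return errors.New("ssh: handshake failed: EOF")
            }
            return nil
        })
    }
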
	I1025 09:16:36.571953  504676 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:16:36.571983  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 09:16:36.603852  504676 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 09:16:36.603885  504676 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 09:16:36.608110  504676 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 09:16:36.608136  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 09:16:36.609831  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:16:36.611218  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:16:36.613495  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:16:36.619216  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 09:16:36.621452  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:16:36.625894  504676 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 09:16:36.625928  504676 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 09:16:36.633013  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:16:36.633518  504676 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 09:16:36.633537  504676 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 09:16:36.642045  504676 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 09:16:36.642074  504676 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 09:16:36.647491  504676 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 09:16:36.647617  504676 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 09:16:36.650919  504676 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 09:16:36.650941  504676 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 09:16:36.665122  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:16:36.671192  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:16:36.674071  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:16:36.681718  504676 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:16:36.681748  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 09:16:36.688164  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:16:36.702790  504676 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 09:16:36.702899  504676 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 09:16:36.708054  504676 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:16:36.708100  504676 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 09:16:36.710037  504676 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 09:16:36.710057  504676 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 09:16:36.715119  504676 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 09:16:36.715152  504676 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 09:16:36.756481  504676 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 09:16:36.756512  504676 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 09:16:36.771356  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:16:36.782966  504676 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:16:36.782992  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 09:16:36.801333  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:16:36.832817  504676 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 09:16:36.832872  504676 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 09:16:36.841698  504676 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 09:16:36.841733  504676 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 09:16:36.883472  504676 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 09:16:36.883508  504676 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 09:16:36.885739  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:16:36.888181  504676 node_ready.go:35] waiting up to 6m0s for node "addons-456159" to be "Ready" ...
	I1025 09:16:36.888755  504676 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1025 09:16:36.893479  504676 node_ready.go:49] node "addons-456159" is "Ready"
	I1025 09:16:36.893510  504676 node_ready.go:38] duration metric: took 5.28349ms for node "addons-456159" to be "Ready" ...
	I1025 09:16:36.893661  504676 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:16:36.893767  504676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:16:36.961164  504676 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 09:16:36.961267  504676 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 09:16:37.003617  504676 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:16:37.003642  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 09:16:37.093382  504676 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 09:16:37.093478  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 09:16:37.131527  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1025 09:16:37.158056  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:16:37.174183  504676 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 09:16:37.174212  504676 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 09:16:37.273608  504676 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 09:16:37.273636  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 09:16:37.341970  504676 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 09:16:37.342078  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 09:16:37.392832  504676 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-456159" context rescaled to 1 replicas
	I1025 09:16:37.419328  504676 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:16:37.419420  504676 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 09:16:37.501008  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:16:37.887523  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.273991597s)
	I1025 09:16:37.887656  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.276401088s)
	W1025 09:16:37.887700  504676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:16:37.887767  504676 retry.go:31] will retry after 304.531421ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:16:37.887779  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.26629742s)
	I1025 09:16:37.887658  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.268401692s)
	I1025 09:16:38.193451  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:16:38.676932  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (2.01170901s)
	I1025 09:16:38.677023  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.00573048s)
	I1025 09:16:38.677098  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.003002446s)
	I1025 09:16:38.677131  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.988939475s)
	I1025 09:16:38.677179  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.905734263s)
	I1025 09:16:38.677518  504676 addons.go:479] Verifying addon registry=true in "addons-456159"
	I1025 09:16:38.677250  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.875821718s)
	I1025 09:16:38.677975  504676 addons.go:479] Verifying addon metrics-server=true in "addons-456159"
	I1025 09:16:38.677298  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.791529033s)
	I1025 09:16:38.677280  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.044238837s)
	I1025 09:16:38.678057  504676 addons.go:479] Verifying addon ingress=true in "addons-456159"
	I1025 09:16:38.677324  504676 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.783533518s)
	I1025 09:16:38.678331  504676 api_server.go:72] duration metric: took 2.466661879s to wait for apiserver process to appear ...
	I1025 09:16:38.678358  504676 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:16:38.678388  504676 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 09:16:38.680675  504676 out.go:179] * Verifying registry addon...
	I1025 09:16:38.680796  504676 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-456159 service yakd-dashboard -n yakd-dashboard
	
	I1025 09:16:38.680874  504676 out.go:179] * Verifying ingress addon...
	I1025 09:16:38.683342  504676 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 09:16:38.684265  504676 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 09:16:38.692767  504676 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
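
The healthz probe above is a plain HTTPS GET against the apiserver that expects the literal body "ok". A short sketch of the same check; certificate verification is skipped here only to keep the sketch self-contained, and a real check should verify against the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            // Sketch only: verify against the cluster CA in anything real.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
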
	W1025 09:16:38.693288  504676 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1025 09:16:38.693298  504676 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:16:38.693325  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:38.698468  504676 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 09:16:38.698495  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
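
The kapi waits above poll the API for pods matching a label selector until they leave Pending. A client-go sketch of the same idea (clientset construction from a kubeconfig is omitted for brevity; this is not minikube's actual kapi code):

    package waitpods

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // WaitForPodsRunning polls until at least one pod matches the selector
    // and every match has reached the Running phase.
    func WaitForPodsRunning(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, nil // tolerate transient API errors and keep polling
                }
                if len(pods.Items) == 0 {
                    return false, nil
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil
                    }
                }
                return true, nil
            })
    }
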
	I1025 09:16:38.699615  504676 api_server.go:141] control plane version: v1.34.1
	I1025 09:16:38.700275  504676 api_server.go:131] duration metric: took 21.892464ms to wait for apiserver health ...
	I1025 09:16:38.700426  504676 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:16:38.790107  504676 system_pods.go:59] 15 kube-system pods found
	I1025 09:16:38.790171  504676 system_pods.go:61] "amd-gpu-device-plugin-c9fx7" [531adaeb-818e-44af-a844-595c0764db21] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:16:38.790186  504676 system_pods.go:61] "coredns-66bc5c9577-sqghn" [8909c200-2462-4a69-8b37-a7f6778a18b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:16:38.790195  504676 system_pods.go:61] "coredns-66bc5c9577-w42ld" [460c955d-4324-4d07-84c7-c62ab5b27820] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:16:38.790204  504676 system_pods.go:61] "etcd-addons-456159" [2c402ccf-982b-4f32-b857-1cd53e78b05b] Running
	I1025 09:16:38.790210  504676 system_pods.go:61] "kube-apiserver-addons-456159" [b3034060-063b-41b2-8fd1-df11b80cdd8a] Running
	I1025 09:16:38.790215  504676 system_pods.go:61] "kube-controller-manager-addons-456159" [d88de771-9948-42e8-98bd-c417c8775e4d] Running
	I1025 09:16:38.790222  504676 system_pods.go:61] "kube-ingress-dns-minikube" [52d9783e-6092-438f-9f12-d905430c158d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:16:38.790227  504676 system_pods.go:61] "kube-proxy-gdwtx" [82e0f913-7dbe-45b6-a4ee-e15d976d9b55] Running
	I1025 09:16:38.790235  504676 system_pods.go:61] "kube-scheduler-addons-456159" [ad5d4ec1-d629-4404-ba80-7da23e15a35a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:16:38.790242  504676 system_pods.go:61] "metrics-server-85b7d694d7-vt5bt" [05632dd8-d543-4134-9c4c-4ffcab0f110a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:16:38.790250  504676 system_pods.go:61] "nvidia-device-plugin-daemonset-62v67" [ac505d8c-525c-4cdb-b892-7dc4dbd2c0c9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:16:38.790257  504676 system_pods.go:61] "registry-6b586f9694-klmzr" [a487d49f-b5d9-45e5-aaaf-07dd3d13040f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:16:38.790265  504676 system_pods.go:61] "registry-creds-764b6fb674-nrfrt" [76900071-4274-4cf3-8100-c305fefefd59] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:16:38.790274  504676 system_pods.go:61] "registry-proxy-cdxgf" [1d20d86d-607e-4d57-ae6c-05bfab83d2da] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:16:38.790282  504676 system_pods.go:61] "storage-provisioner" [86c66703-f502-4054-a1e6-ea12a76f7e29] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:16:38.790292  504676 system_pods.go:74] duration metric: took 89.826323ms to wait for pod list to return data ...
	I1025 09:16:38.790305  504676 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:16:38.796258  504676 default_sa.go:45] found service account: "default"
	I1025 09:16:38.796291  504676 default_sa.go:55] duration metric: took 5.977763ms for default service account to be created ...
	I1025 09:16:38.796305  504676 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:16:38.871030  504676 system_pods.go:86] 15 kube-system pods found
	I1025 09:16:38.871080  504676 system_pods.go:89] "amd-gpu-device-plugin-c9fx7" [531adaeb-818e-44af-a844-595c0764db21] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1025 09:16:38.871092  504676 system_pods.go:89] "coredns-66bc5c9577-sqghn" [8909c200-2462-4a69-8b37-a7f6778a18b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:16:38.871105  504676 system_pods.go:89] "coredns-66bc5c9577-w42ld" [460c955d-4324-4d07-84c7-c62ab5b27820] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:16:38.871111  504676 system_pods.go:89] "etcd-addons-456159" [2c402ccf-982b-4f32-b857-1cd53e78b05b] Running
	I1025 09:16:38.871117  504676 system_pods.go:89] "kube-apiserver-addons-456159" [b3034060-063b-41b2-8fd1-df11b80cdd8a] Running
	I1025 09:16:38.871123  504676 system_pods.go:89] "kube-controller-manager-addons-456159" [d88de771-9948-42e8-98bd-c417c8775e4d] Running
	I1025 09:16:38.871133  504676 system_pods.go:89] "kube-ingress-dns-minikube" [52d9783e-6092-438f-9f12-d905430c158d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:16:38.871138  504676 system_pods.go:89] "kube-proxy-gdwtx" [82e0f913-7dbe-45b6-a4ee-e15d976d9b55] Running
	I1025 09:16:38.871146  504676 system_pods.go:89] "kube-scheduler-addons-456159" [ad5d4ec1-d629-4404-ba80-7da23e15a35a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:16:38.871154  504676 system_pods.go:89] "metrics-server-85b7d694d7-vt5bt" [05632dd8-d543-4134-9c4c-4ffcab0f110a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:16:38.871164  504676 system_pods.go:89] "nvidia-device-plugin-daemonset-62v67" [ac505d8c-525c-4cdb-b892-7dc4dbd2c0c9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:16:38.871173  504676 system_pods.go:89] "registry-6b586f9694-klmzr" [a487d49f-b5d9-45e5-aaaf-07dd3d13040f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:16:38.871181  504676 system_pods.go:89] "registry-creds-764b6fb674-nrfrt" [76900071-4274-4cf3-8100-c305fefefd59] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:16:38.871189  504676 system_pods.go:89] "registry-proxy-cdxgf" [1d20d86d-607e-4d57-ae6c-05bfab83d2da] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:16:38.871197  504676 system_pods.go:89] "storage-provisioner" [86c66703-f502-4054-a1e6-ea12a76f7e29] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:16:38.871208  504676 system_pods.go:126] duration metric: took 74.894036ms to wait for k8s-apps to be running ...
	I1025 09:16:38.871221  504676 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:16:38.871278  504676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:16:39.196124  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:39.221523  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:39.691439  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:39.691568  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:40.050202  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.892025349s)
	W1025 09:16:40.050257  504676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 09:16:40.050285  504676 retry.go:31] will retry after 289.152998ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
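This failure is the usual CRD establishment race: the VolumeSnapshot CRDs and a VolumeSnapshotClass custom resource travel in the same kubectl apply, and the apiserver is not yet serving the new kind when the custom resource arrives, hence "ensure CRDs are installed first". minikube's answer, visible in the retry.go lines throughout this log, is to re-run the whole apply with growing, jittered delays. A minimal sketch of that retry pattern in plain Go; the function name and backoff policy are illustrative, not minikube's actual helpers.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f <file>...` until it succeeds,
// mirroring the "will retry after ..." lines in the log. CRD registration is
// eventually consistent, so an apply that fails on a brand-new kind usually
// succeeds on a later attempt once the CRD is established.
func applyWithRetry(attempts int, files ...string) error {
	args := []string{"apply", "--force"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	backoff := 300 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("kubectl apply: %v\n%s", err, out)
		// Grow the delay and add jitter; the exact policy in minikube's
		// retry.go is assumed here, not copied.
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
		backoff *= 2
	}
	return lastErr
}

func main() {
	err := applyWithRetry(5,
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
	)
	if err != nil {
		fmt.Println("giving up:", err)
	}
}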
	I1025 09:16:40.050527  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.549481011s)
	I1025 09:16:40.050551  504676 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-456159"
	I1025 09:16:40.050825  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.919174653s)
	I1025 09:16:40.051217  504676 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.857729551s)
	W1025 09:16:40.051253  504676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:16:40.051273  504676 retry.go:31] will retry after 370.80231ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
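The ig-crd.yaml failure is different and never resolves by retrying: kubectl's client-side validation rejects a document in the file that declares neither apiVersion nor kind (typically an empty or comment-only document in a multi-document manifest), and --validate=false, which the error message offers, would merely skip the check rather than fix the manifest. A quick way to locate the offending document, sketched with gopkg.in/yaml.v3; the local file path is an assumption.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Flags YAML documents that would trip kubectl's "apiVersion not set,
// kind not set" validation. Fully empty documents may be skipped by the
// decoder, but partially filled ones are reported.
func main() {
	f, err := os.Open("ig-crd.yaml") // assumed local copy of the addon manifest
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			fmt.Printf("document %d is missing apiVersion and/or kind\n", i)
		}
	}
}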
	I1025 09:16:40.051308  504676 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.180011862s)
	I1025 09:16:40.051323  504676 system_svc.go:56] duration metric: took 1.180101831s WaitForService to wait for kubelet
	I1025 09:16:40.051334  504676 kubeadm.go:586] duration metric: took 3.83966723s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:16:40.051356  504676 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:16:40.052483  504676 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 09:16:40.055517  504676 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 09:16:40.060856  504676 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 09:16:40.060927  504676 node_conditions.go:123] node cpu capacity is 8
	I1025 09:16:40.060945  504676 node_conditions.go:105] duration metric: took 9.582627ms to run NodePressure ...
	I1025 09:16:40.060961  504676 start.go:241] waiting for startup goroutines ...
	I1025 09:16:40.064723  504676 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:16:40.064752  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:40.193441  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:40.193837  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:40.339633  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:16:40.422810  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:16:40.560973  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:40.689878  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:40.690107  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:41.059570  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:41.188447  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:41.188609  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1025 09:16:41.233346  504676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:16:41.233385  504676 retry.go:31] will retry after 441.521825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:16:41.560218  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:41.675183  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:16:41.687377  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:41.687569  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:42.060703  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:42.187929  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:42.188049  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:16:42.412934  504676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:16:42.412974  504676 retry.go:31] will retry after 1.0284355s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:16:42.559778  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:42.690040  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:42.690088  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:43.060455  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:43.187500  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:43.187536  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:43.441817  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:16:43.559702  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:43.687880  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:43.688187  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:43.706862  504676 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 09:16:43.706950  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:43.730229  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:43.849342  504676 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 09:16:43.866937  504676 addons.go:238] Setting addon gcp-auth=true in "addons-456159"
	I1025 09:16:43.867013  504676 host.go:66] Checking if "addons-456159" exists ...
	I1025 09:16:43.867473  504676 cli_runner.go:164] Run: docker container inspect addons-456159 --format={{.State.Status}}
	I1025 09:16:43.891293  504676 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 09:16:43.891362  504676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-456159
	I1025 09:16:43.914274  504676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/addons-456159/id_rsa Username:docker}
	I1025 09:16:44.060336  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:16:44.145480  504676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:16:44.145522  504676 retry.go:31] will retry after 1.849136314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:16:44.147331  504676 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 09:16:44.148491  504676 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:16:44.149659  504676 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 09:16:44.149687  504676 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 09:16:44.166668  504676 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 09:16:44.166700  504676 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 09:16:44.184413  504676 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:16:44.184437  504676 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 09:16:44.187789  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:44.187966  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:44.201884  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:16:44.558868  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:44.605046  504676 addons.go:479] Verifying addon gcp-auth=true in "addons-456159"
	I1025 09:16:44.606635  504676 out.go:179] * Verifying gcp-auth addon...
	I1025 09:16:44.609893  504676 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 09:16:44.612479  504676 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 09:16:44.612503  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
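Each kapi.go pair in this log, "Found N Pods for label selector" followed by "waiting for pod ...", is minikube's readiness loop: list pods matching the selector in the namespace and poll until every one leaves Pending. Roughly the same loop with client-go looks like this; selector, namespace, and the half-second cadence are taken from the log, but the code is a sketch, not minikube's (the real loop also tracks container readiness, omitted here).

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls pods matching selector in ns until all are Running,
// the same shape as minikube's kapi.go wait loop.
func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if running == len(pods.Items) {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly half-second polls
	}
	return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForLabel(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}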
	I1025 09:16:44.686949  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:44.687409  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:45.060654  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:45.117180  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:45.186895  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:45.187000  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:45.559454  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:45.613457  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:45.686461  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:45.686966  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:45.995646  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:16:46.059718  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:46.113776  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:46.187310  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:46.187712  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:46.618950  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:46.619150  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:46.720535  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:46.720731  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:16:46.953712  504676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:16:46.953750  504676 retry.go:31] will retry after 1.118046925s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:16:47.059436  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:47.113007  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:47.187331  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:47.187417  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:47.558883  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:47.613823  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:47.687031  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:47.687388  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:48.060144  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:48.071959  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:16:48.113558  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:48.186739  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:48.187492  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:48.559253  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:48.613364  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:48.686825  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:48.687151  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:16:48.779722  504676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:16:48.779757  504676 retry.go:31] will retry after 1.705068787s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:16:49.059383  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:49.160030  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:49.186914  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:49.187182  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:49.559158  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:49.613125  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:49.687328  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:49.687409  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:50.060151  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:50.114034  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:50.187216  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:50.187256  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:50.485514  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:16:50.559542  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:50.613393  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:50.687801  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:50.687830  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:51.060763  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:51.113443  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:51.186523  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:51.187115  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:16:51.206167  504676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:16:51.206206  504676 retry.go:31] will retry after 4.627944714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:16:51.560291  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:51.613385  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:51.687674  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:51.687851  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:52.059532  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:52.113353  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:52.187365  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:52.187471  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:52.559570  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:52.613621  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:52.687513  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:52.688596  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:53.059477  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:53.113541  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:53.186751  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:53.187237  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:53.560192  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:53.612922  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:53.687110  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:53.687921  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:54.059911  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:54.113991  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:54.187379  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:54.187515  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:54.560627  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:54.613633  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:54.686728  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:54.687367  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:55.059433  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:55.113163  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:55.187327  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:55.187440  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:55.559909  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:55.614117  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:55.687126  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:55.687318  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:55.834351  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:16:56.059572  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:56.113362  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:56.188042  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:56.188222  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:16:56.537195  504676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:16:56.537232  504676 retry.go:31] will retry after 5.8152539s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1025 09:16:56.559376  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:56.613282  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:56.687445  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:56.687468  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:57.060520  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:57.113536  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:57.186467  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:57.187124  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:57.559364  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:57.613091  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:57.687265  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:57.687369  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:58.059125  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:58.113095  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:58.187443  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:58.187492  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:58.559526  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:58.613555  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:58.687351  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:58.687480  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:59.119266  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:59.119411  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:59.340674  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:59.340732  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:16:59.560433  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:16:59.662206  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:16:59.687417  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:16:59.687795  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:00.085676  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:00.186297  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:00.187294  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:00.187341  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:17:00.560896  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:00.613331  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:00.689797  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:17:00.689836  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:01.060227  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:01.112897  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:01.187389  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:17:01.187450  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:01.559433  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:01.613743  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:01.687522  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:17:01.687574  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:02.060516  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:02.161385  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:02.187377  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:17:02.187408  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:02.353551  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:17:02.560024  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:02.660438  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:02.686945  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:17:02.687889  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:03.059742  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:03.112966  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:17:03.119179  504676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:17:03.119213  504676 retry.go:31] will retry after 12.369560623s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
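	Note: the failure above is client-side validation, not an apply conflict. Every document in a manifest file must carry apiVersion and kind, so an empty YAML document in ig-crd.yaml (for example a trailing "---" with nothing after it, or a comment-only stanza) produces exactly this message, while the other resources in the batch still apply. A quick local reproduction, assuming shell access to the node (the grep heuristic is illustrative; --dry-run=client is standard kubectl):

	    # Look for document separators that may delimit an empty YAML document.
	    grep -n '^---' /etc/kubernetes/addons/ig-crd.yaml
	    # Re-run only client-side validation, without touching the cluster.
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
	      -f /etc/kubernetes/addons/ig-crd.yaml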
	I1025 09:17:03.188262  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:03.188431  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:17:03.559627  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:03.661083  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:03.687053  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:17:03.687246  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:04.059373  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:04.113888  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:04.187297  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:17:04.187467  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:04.559699  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:04.613489  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:04.687908  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:17:04.687961  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:05.059828  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:05.114017  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:05.214932  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:17:05.214998  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:05.559177  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:05.613225  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:05.687519  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:17:05.687670  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:06.091647  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:06.124767  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:06.186789  504676 kapi.go:107] duration metric: took 27.503442463s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 09:17:06.187677  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:06.559226  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:06.613334  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:06.687435  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:07.060099  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:07.113930  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:07.215513  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:07.559689  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:07.613601  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:07.687851  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:08.059520  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:08.179190  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:08.188149  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:08.560257  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:08.661348  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:08.688810  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:09.060530  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:09.160841  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:09.261933  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:09.560310  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:09.614108  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:09.688751  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:10.059409  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:10.113493  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:10.187461  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:10.561656  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:10.613731  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:10.688449  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:11.061173  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:11.113144  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:11.188672  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:11.560057  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:11.614312  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:11.688799  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:12.092316  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:12.112888  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:12.193807  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:12.559676  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:12.613630  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:12.687825  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:13.059855  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:13.113612  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:13.188028  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:13.561714  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:13.661166  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:13.687547  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:14.059055  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:14.157786  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:14.187692  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:14.560439  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:14.612966  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:14.688391  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:15.059705  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:15.113639  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:15.188039  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:15.489162  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:17:15.560280  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:15.613116  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:15.688439  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:16.060705  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:16.112997  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:17:16.161541  504676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:17:16.161594  504676 retry.go:31] will retry after 18.488393857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:17:16.188102  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:16.604446  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:16.613040  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:16.688608  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:17.059186  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:17.112961  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:17.221172  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:17.558807  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:17.657572  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:17.687923  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:18.059758  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:18.113573  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:18.187949  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:18.560012  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:18.613883  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:18.688152  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:19.059871  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:19.113612  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:19.187905  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:19.559375  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:19.659238  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:19.688489  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:20.059756  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:20.113593  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:20.188128  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:20.559606  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:20.613550  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:20.687321  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:21.059250  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:21.113128  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:21.188173  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:21.559844  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:21.613840  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:21.688217  504676 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:17:22.059370  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:22.126177  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:22.188310  504676 kapi.go:107] duration metric: took 43.504045896s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 09:17:22.560371  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:22.613102  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:23.060107  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:23.113419  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:23.583033  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:23.613499  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:24.059732  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:24.113402  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:24.560001  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:24.660799  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:25.059710  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:25.113609  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:25.559236  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:17:25.613821  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:26.059715  504676 kapi.go:107] duration metric: took 46.004197912s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 09:17:26.113203  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:26.613393  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:27.114093  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:27.613819  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:28.113364  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:28.612802  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:29.113503  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:29.613282  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:30.114231  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:30.614109  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:31.113495  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:31.613082  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:32.113890  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:32.613392  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:33.113131  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:33.613749  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:34.113300  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:34.613458  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:34.650628  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:17:35.113079  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1025 09:17:35.205080  504676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:17:35.205121  504676 retry.go:31] will retry after 30.442661865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:17:35.613361  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:36.113621  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:36.613437  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:37.113081  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:37.613996  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:38.114252  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:38.613371  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:39.113528  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:39.613953  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:40.113777  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:40.613642  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:41.113408  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:41.613490  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:42.112804  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:42.613303  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:43.114037  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:43.614080  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:44.200544  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:44.612965  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:45.113243  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:45.613974  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:46.113304  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:46.613557  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:47.113499  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:47.613246  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:48.113897  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:48.613178  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:49.114053  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:49.613999  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:50.114101  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:50.613790  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:51.114103  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:51.679574  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:52.113385  504676 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:17:52.614389  504676 kapi.go:107] duration metric: took 1m8.004493564s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 09:17:52.616510  504676 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-456159 cluster.
	I1025 09:17:52.618013  504676 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 09:17:52.619413  504676 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
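	Note: the gcp-auth opt-out mentioned above is a plain pod label. A minimal sketch of what that looks like, assuming only the label key named in the log line (the pod name, label value, and image are illustrative):

	    kubectl apply --dry-run=client -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds               # hypothetical pod name
	      labels:
	        gcp-auth-skip-secret: "true"   # opt this pod out of credential injection
	    spec:
	      containers:
	      - name: app
	        image: docker.io/nginx:alpine
	    EOF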
	I1025 09:18:05.648082  504676 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1025 09:18:06.220108  504676 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:18:06.220259  504676 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1025 09:18:06.223230  504676 out.go:179] * Enabled addons: nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, registry-creds, amd-gpu-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volcano, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1025 09:18:06.224452  504676 addons.go:514] duration metric: took 1m30.012593262s for enable addons: enabled=[nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner registry-creds amd-gpu-device-plugin metrics-server yakd storage-provisioner-rancher volcano volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1025 09:18:06.224509  504676 start.go:246] waiting for cluster config update ...
	I1025 09:18:06.224536  504676 start.go:255] writing updated cluster config ...
	I1025 09:18:06.224846  504676 ssh_runner.go:195] Run: rm -f paused
	I1025 09:18:06.228995  504676 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:18:06.233227  504676 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w42ld" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:18:06.237480  504676 pod_ready.go:94] pod "coredns-66bc5c9577-w42ld" is "Ready"
	I1025 09:18:06.237503  504676 pod_ready.go:86] duration metric: took 4.252998ms for pod "coredns-66bc5c9577-w42ld" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:18:06.239511  504676 pod_ready.go:83] waiting for pod "etcd-addons-456159" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:18:06.243412  504676 pod_ready.go:94] pod "etcd-addons-456159" is "Ready"
	I1025 09:18:06.243437  504676 pod_ready.go:86] duration metric: took 3.903208ms for pod "etcd-addons-456159" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:18:06.245476  504676 pod_ready.go:83] waiting for pod "kube-apiserver-addons-456159" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:18:06.249146  504676 pod_ready.go:94] pod "kube-apiserver-addons-456159" is "Ready"
	I1025 09:18:06.249171  504676 pod_ready.go:86] duration metric: took 3.67244ms for pod "kube-apiserver-addons-456159" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:18:06.251038  504676 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-456159" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:18:06.633495  504676 pod_ready.go:94] pod "kube-controller-manager-addons-456159" is "Ready"
	I1025 09:18:06.633526  504676 pod_ready.go:86] duration metric: took 382.467447ms for pod "kube-controller-manager-addons-456159" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:18:06.832524  504676 pod_ready.go:83] waiting for pod "kube-proxy-gdwtx" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:18:07.232914  504676 pod_ready.go:94] pod "kube-proxy-gdwtx" is "Ready"
	I1025 09:18:07.232941  504676 pod_ready.go:86] duration metric: took 400.386041ms for pod "kube-proxy-gdwtx" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:18:07.432823  504676 pod_ready.go:83] waiting for pod "kube-scheduler-addons-456159" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:18:07.832968  504676 pod_ready.go:94] pod "kube-scheduler-addons-456159" is "Ready"
	I1025 09:18:07.832998  504676 pod_ready.go:86] duration metric: took 400.151386ms for pod "kube-scheduler-addons-456159" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:18:07.833009  504676 pod_ready.go:40] duration metric: took 1.603984816s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:18:07.879495  504676 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 09:18:07.881338  504676 out.go:179] * Done! kubectl is now configured to use "addons-456159" cluster and "default" namespace by default
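	Note: the pod_ready loop above polls each control-plane component by label until it reports Ready. A hand-run equivalent of the same wait, using the selector set printed in the log (the timeout value is illustrative):

	    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	      kubectl --context addons-456159 -n kube-system \
	        wait --for=condition=Ready pod -l "$sel" --timeout=240s
	    done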
	
	
	==> Docker <==
	Oct 25 09:19:47 addons-456159 cri-dockerd[1356]: time="2025-10-25T09:19:47Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Oct 25 09:19:53 addons-456159 dockerd[1047]: time="2025-10-25T09:19:53.777879679Z" level=info msg="ignoring event" container=ba4dbe8acd67d6a7c1e3d534a43810cccbbe81f6f974a12f31572a43d9ed4bc5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:53 addons-456159 dockerd[1047]: time="2025-10-25T09:19:53.923120521Z" level=info msg="ignoring event" container=321cd95ea4993f72699008ab7d0f12cea0fa9d5cd58931c56cf63be78a817fd4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:55 addons-456159 dockerd[1047]: time="2025-10-25T09:19:55.397466099Z" level=info msg="ignoring event" container=5d96e69089b65772d55031b20e2867f7866118d9fed83baf9234be8d46a84bf2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:55 addons-456159 dockerd[1047]: time="2025-10-25T09:19:55.398499082Z" level=info msg="ignoring event" container=484faf76cf04f398aa104949d434606b32f92ff988a0e7510c0bb9de8a3c61f5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:55 addons-456159 cri-dockerd[1356]: time="2025-10-25T09:19:55Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"snapshot-controller-7d9fbc56b8-x8qs5_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Oct 25 09:19:55 addons-456159 dockerd[1047]: time="2025-10-25T09:19:55.564211447Z" level=info msg="ignoring event" container=ad7b82431541d96a4cdc6e005b719a86add788fcfe1a3face744b8e6e8e70bef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:55 addons-456159 dockerd[1047]: time="2025-10-25T09:19:55.581118568Z" level=info msg="ignoring event" container=746143d75ea403d0b48ff74eecb94c1229087ae21f40d840b5048e0ebb96e8c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:56 addons-456159 dockerd[1047]: time="2025-10-25T09:19:56.111495624Z" level=info msg="ignoring event" container=1b705b0011204699c3c95aedd02a3e72dd2e7b0937b47e835d79e3019a4a5beb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:56 addons-456159 dockerd[1047]: time="2025-10-25T09:19:56.116939325Z" level=info msg="ignoring event" container=69b89a2cd48010639cb681ab2d6e64e6743ffcfd4310a9caa6acc4ba16f6e5eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:56 addons-456159 dockerd[1047]: time="2025-10-25T09:19:56.117837044Z" level=info msg="ignoring event" container=0a0c160ddf3e694f8a6f17126705bd7148a56ac478c259fc53cabe1844ed7029 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:56 addons-456159 dockerd[1047]: time="2025-10-25T09:19:56.127458862Z" level=info msg="ignoring event" container=45d24f9fe65e180b711726395d8b3e18233a5d08b493d8d171b30cb813ff6b46 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:56 addons-456159 dockerd[1047]: time="2025-10-25T09:19:56.127497033Z" level=info msg="ignoring event" container=505acd137f4f5f78785743810e6280fac227b21f0727e17935c8e78c63818b61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:56 addons-456159 dockerd[1047]: time="2025-10-25T09:19:56.127524354Z" level=info msg="ignoring event" container=2702ad8202cdf8ff760592e53024b945e188a479451a55dbe8af9799d87f624b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:56 addons-456159 dockerd[1047]: time="2025-10-25T09:19:56.127539707Z" level=info msg="ignoring event" container=8066b0868660731b84d9b615abbbba838b462dd896b5ca45dc23d9eb6a9e7932 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:56 addons-456159 dockerd[1047]: time="2025-10-25T09:19:56.144981471Z" level=info msg="ignoring event" container=9fc36da55b919b042957ac738c19cc182041ea4b0369139060fd9074b4bd62fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:56 addons-456159 dockerd[1047]: time="2025-10-25T09:19:56.281276684Z" level=info msg="ignoring event" container=086a2a6ed191dbdd76cf6acd553152c8b2ddfb28e8dca63a1fa679b996eb57ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:56 addons-456159 dockerd[1047]: time="2025-10-25T09:19:56.312290985Z" level=info msg="ignoring event" container=65bc99f1a915e9a303718a498b36387adf0af8d1fed0bfb24ed6aba8aa637c67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:19:56 addons-456159 dockerd[1047]: time="2025-10-25T09:19:56.329267966Z" level=info msg="ignoring event" container=398cfe5552e41d64b19567b4af2fa9e599f5c1eeb30257a930e8e35fbbc28aaf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 09:20:05 addons-456159 dockerd[1047]: time="2025-10-25T09:20:05.883677401Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:20:57 addons-456159 dockerd[1047]: time="2025-10-25T09:20:57.942809278Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:20:57 addons-456159 cri-dockerd[1356]: time="2025-10-25T09:20:57Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 25 09:22:27 addons-456159 dockerd[1047]: time="2025-10-25T09:22:27.903405946Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:25:10 addons-456159 dockerd[1047]: time="2025-10-25T09:25:10.959982694Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:25:10 addons-456159 cri-dockerd[1356]: time="2025-10-25T09:25:10Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
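	Note: the "toomanyrequests" pulls above are the proximate cause of the nginx pod's ImagePullBackOff: Docker Hub throttles unauthenticated pulls per source IP, and every retry of docker.io/nginx:alpine hit the same ceiling. The remaining anonymous quota can be checked with Docker's documented ratelimitpreview endpoint (a HEAD request that does not itself consume quota; requires curl and jq, run from the affected host):

	    TOKEN=$(curl -fsSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	    curl -fsSI -H "Authorization: Bearer $TOKEN" \
	      "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit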
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	76b3406b8b4ba       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          8 minutes ago       Running             busybox                   0                   5c66bd2e11881       busybox                                     default
	6ca085ecb6641       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             10 minutes ago      Running             controller                0                   6a1111ccb189f       ingress-nginx-controller-675c5ddd98-nj85m   ingress-nginx
	d45405c30f9c4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   10 minutes ago      Exited              patch                     0                   f71d6cb121936       ingress-nginx-admission-patch-ghljt         ingress-nginx
	1ef9d798a773b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   10 minutes ago      Exited              create                    0                   521b239d0e608       ingress-nginx-admission-create-hrtd9        ingress-nginx
	64e3800eeb232       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:df0516c4c988694d65b19400d0990f129d5fd68f211cc826e7fdad55140626fd            10 minutes ago      Running             gadget                    0                   fdc37b14fa1c1       gadget-lr8sb                                gadget
	a7e4db4cad131       kicbase/minikube-ingress-dns@sha256:d7c3fd25a0ea8fa62d4096eda202b3fc69d994b01ed6ab431def629f16ba1a89                         10 minutes ago      Running             minikube-ingress-dns      0                   b01c077bd3f09       kube-ingress-dns-minikube                   kube-system
	55fcf67d9d68c       6e38f40d628db                                                                                                                10 minutes ago      Running             storage-provisioner       0                   c5a73a9c26ee7       storage-provisioner                         kube-system
	a0fb3cbe2ff7a       52546a367cc9e                                                                                                                10 minutes ago      Running             coredns                   0                   39a00b6bbbca8       coredns-66bc5c9577-w42ld                    kube-system
	a724bb230072d       fc25172553d79                                                                                                                10 minutes ago      Running             kube-proxy                0                   06121341815ef       kube-proxy-gdwtx                            kube-system
	7635ed3bd3109       7dd6aaa1717ab                                                                                                                11 minutes ago      Running             kube-scheduler            0                   af57737a20356       kube-scheduler-addons-456159                kube-system
	46c407204b296       5f1f5298c888d                                                                                                                11 minutes ago      Running             etcd                      0                   b79c2d6a8f7b8       etcd-addons-456159                          kube-system
	be26a450ba1b3       c80c8dbafe7dd                                                                                                                11 minutes ago      Running             kube-controller-manager   0                   fae024298f204       kube-controller-manager-addons-456159       kube-system
	88aef050d793d       c3994bc696102                                                                                                                11 minutes ago      Running             kube-apiserver            0                   a0e884fe3d9e6       kube-apiserver-addons-456159                kube-system
	
	
	==> controller_ingress [6ca085ecb664] <==
	I1025 09:17:22.669729       8 nginx.go:339] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I1025 09:17:22.670263       8 controller.go:214] "Configuration changes detected, backend reload required"
	I1025 09:17:22.677543       8 leaderelection.go:271] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I1025 09:17:22.677636       8 status.go:85] "New leader elected" identity="ingress-nginx-controller-675c5ddd98-nj85m"
	I1025 09:17:22.681571       8 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-nj85m" node="addons-456159"
	I1025 09:17:22.726853       8 controller.go:228] "Backend successfully reloaded"
	I1025 09:17:22.726937       8 controller.go:240] "Initial sync, sleeping for 1 second"
	I1025 09:17:22.726976       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-675c5ddd98-nj85m", UID:"315c3f5e-d4e1-407d-8478-8e79e9df29b2", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I1025 09:17:22.783015       8 status.go:224] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-nj85m" node="addons-456159"
	W1025 09:19:26.528446       8 controller.go:1126] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I1025 09:19:26.529606       8 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I1025 09:19:26.532604       8 store.go:443] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	I1025 09:19:26.532878       8 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"29a338b2-0a49-4f56-ac47-dc66a8d829f8", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1984", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W1025 09:19:29.511105       8 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1025 09:19:29.511785       8 controller.go:214] "Configuration changes detected, backend reload required"
	I1025 09:19:29.559382       8 controller.go:228] "Backend successfully reloaded"
	I1025 09:19:29.559651       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-675c5ddd98-nj85m", UID:"315c3f5e-d4e1-407d-8478-8e79e9df29b2", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W1025 09:19:32.843804       8 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W1025 09:19:38.654878       8 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W1025 09:19:44.129408       8 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W1025 09:19:56.029689       8 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	W1025 09:19:59.363248       8 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
	I1025 09:20:22.685958       8 status.go:309] "updating Ingress status" namespace="default" ingress="nginx-ingress" currentValue=null newValue=[{"ip":"192.168.49.2"}]
	I1025 09:20:22.690064       8 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"29a338b2-0a49-4f56-ac47-dc66a8d829f8", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2334", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	W1025 09:20:22.690147       8 controller.go:1232] Service "default/nginx" does not have any active Endpoint.
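	Note: the repeated "does not have any active Endpoint" warnings are consistent with the pending nginx pod: until the image pull succeeds the pod never turns Ready, the Service publishes no endpoints, and the controller has nothing to route to for that backend. A quick confirmation against the same context:

	    kubectl --context addons-456159 -n default get endpoints nginx
	    kubectl --context addons-456159 -n default describe ingress nginx-ingress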
	
	
	==> coredns [a0fb3cbe2ff7] <==
	[INFO] 10.244.0.9:46871 - 49544 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000176853s
	[INFO] 10.244.0.9:57888 - 34855 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000092885s
	[INFO] 10.244.0.9:57888 - 34578 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000138967s
	[INFO] 10.244.0.9:55853 - 13143 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000088484s
	[INFO] 10.244.0.9:55853 - 12858 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000129577s
	[INFO] 10.244.0.9:43080 - 51981 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000139724s
	[INFO] 10.244.0.9:43080 - 51729 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164617s
	[INFO] 10.244.0.27:54221 - 35662 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000317356s
	[INFO] 10.244.0.27:59979 - 37026 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000432854s
	[INFO] 10.244.0.27:54255 - 55112 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000182038s
	[INFO] 10.244.0.27:55529 - 53662 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000228516s
	[INFO] 10.244.0.27:54991 - 38271 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000111026s
	[INFO] 10.244.0.27:49257 - 2208 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013786s
	[INFO] 10.244.0.27:53658 - 62981 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005481165s
	[INFO] 10.244.0.27:47570 - 8275 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005963514s
	[INFO] 10.244.0.27:43959 - 22147 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00682984s
	[INFO] 10.244.0.27:47121 - 44769 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.008772932s
	[INFO] 10.244.0.27:48193 - 47034 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00634122s
	[INFO] 10.244.0.27:59385 - 10843 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00739742s
	[INFO] 10.244.0.27:49638 - 49986 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004206668s
	[INFO] 10.244.0.27:38321 - 33618 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00455154s
	[INFO] 10.244.0.27:37159 - 43557 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001005619s
	[INFO] 10.244.0.27:38051 - 29454 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002296778s
	[INFO] 10.244.0.34:58782 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000376592s
	[INFO] 10.244.0.34:42522 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000153475s
	
	
	==> describe nodes <==
	Name:               addons-456159
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-456159
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=addons-456159
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_16_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-456159
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:16:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-456159
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:27:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:26:23 +0000   Sat, 25 Oct 2025 09:16:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:26:23 +0000   Sat, 25 Oct 2025 09:16:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:26:23 +0000   Sat, 25 Oct 2025 09:16:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:26:23 +0000   Sat, 25 Oct 2025 09:16:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-456159
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                6b446b76-1b25-4392-8a19-78c51aa98ac3
	  Boot ID:                    2fda8ac7-743b-4d90-8011-17dbcec8d3ad
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m41s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  gadget                      gadget-lr8sb                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-nj85m    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-w42ld                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-addons-456159                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-456159                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-456159        200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-gdwtx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-456159                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             260Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x9 over 11m)  kubelet          Node addons-456159 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x7 over 11m)  kubelet          Node addons-456159 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node addons-456159 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node addons-456159 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node addons-456159 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node addons-456159 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m                kubelet          Node addons-456159 status is now: NodeReady
	  Normal  RegisteredNode           10m                node-controller  Node addons-456159 event: Registered Node addons-456159 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 c5 76 d2 5f 12 08 06
	[  +0.023946] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 70 66 eb fd 75 08 06
	[  +0.049026] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce 2d 80 ec a5 47 08 06
	[  +3.694996] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1e b6 c2 b4 f4 52 08 06
	[  +1.725251] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 35 d3 1d 1a 69 08 06
	[  +0.384426] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 a9 47 60 b0 27 08 06
	[  +0.023599] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 c3 01 22 23 63 08 06
	[ +20.439263] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a 06 8b 23 bb 13 08 06
	[  +7.760078] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a b7 37 cd 89 06 08 06
	[  +0.000495] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 70 1e fe 24 24 08 06
	[Oct25 09:19] IPv4: martian source 10.244.0.1 from 10.244.0.34, on dev eth0
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff ca 69 af 92 55 8d 08 06
	[  +0.000498] IPv4: martian source 10.244.0.34 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 70 1e fe 24 24 08 06
	[  +0.000602] IPv4: martian source 10.244.0.34 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 af 9c e7 7c 87 08 06
	
	
	==> etcd [46c407204b29] <==
	{"level":"warn","ts":"2025-10-25T09:16:27.941668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:16:40.671244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:16:40.679775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:16:46.616574Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.211439ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/gadget/gadget\" limit:1 ","response":"range_response_count:1 size:589"}
	{"level":"info","ts":"2025-10-25T09:16:46.616681Z","caller":"traceutil/trace.go:172","msg":"trace[709438926] range","detail":"{range_begin:/registry/serviceaccounts/gadget/gadget; range_end:; response_count:1; response_revision:1022; }","duration":"143.337919ms","start":"2025-10-25T09:16:46.473325Z","end":"2025-10-25T09:16:46.616663Z","steps":["trace[709438926] 'agreement among raft nodes before linearized reading'  (duration: 32.835655ms)","trace[709438926] 'range keys from in-memory index tree'  (duration: 110.272621ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T09:16:46.617178Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.36345ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040866769516866 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/gcp-auth/gcp-auth-78565c9fb4-mmnm5.1871b13cffae8827\" mod_revision:1021 > success:<request_put:<key:\"/registry/events/gcp-auth/gcp-auth-78565c9fb4-mmnm5.1871b13cffae8827\" value_size:684 lease:8128040866769515156 >> failure:<request_range:<key:\"/registry/events/gcp-auth/gcp-auth-78565c9fb4-mmnm5.1871b13cffae8827\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-25T09:16:46.617276Z","caller":"traceutil/trace.go:172","msg":"trace[43856418] transaction","detail":"{read_only:false; response_revision:1023; number_of_response:1; }","duration":"237.799881ms","start":"2025-10-25T09:16:46.379457Z","end":"2025-10-25T09:16:46.617257Z","steps":["trace[43856418] 'process raft request'  (duration: 126.729115ms)","trace[43856418] 'compare'  (duration: 110.292299ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-25T09:16:59.338890Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"153.455807ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:16:59.339085Z","caller":"traceutil/trace.go:172","msg":"trace[746787740] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"153.659994ms","start":"2025-10-25T09:16:59.185408Z","end":"2025-10-25T09:16:59.339068Z","steps":["trace[746787740] 'range keys from in-memory index tree'  (duration: 153.379323ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:16:59.338893Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.737115ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-25T09:16:59.339193Z","caller":"traceutil/trace.go:172","msg":"trace[196327442] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1069; }","duration":"153.033306ms","start":"2025-10-25T09:16:59.186142Z","end":"2025-10-25T09:16:59.339175Z","steps":["trace[196327442] 'range keys from in-memory index tree'  (duration: 152.691893ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T09:17:05.346199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:17:05.353132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:17:05.412486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:17:05.422353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:17:05.432143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:17:05.457056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:17:05.509341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:17:05.520657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:17:05.527073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:17:05.549297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:17:05.561472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46326","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:26:27.443067Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2412}
	{"level":"info","ts":"2025-10-25T09:26:27.667390Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2412,"took":"223.498979ms","hash":3585795503,"current-db-size-bytes":10117120,"current-db-size":"10 MB","current-db-size-in-use-bytes":2506752,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2025-10-25T09:26:27.667435Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3585795503,"revision":2412,"compact-revision":-1}
	
	
	==> kernel <==
	 09:27:28 up  1:09,  0 user,  load average: 0.12, 0.72, 1.87
	Linux addons-456159 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [88aef050d793] <==
	W1025 09:18:37.328128       1 cacher.go:182] Terminating all watchers from cacher cronjobs.batch.volcano.sh
	W1025 09:18:37.604219       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1025 09:18:37.701706       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1025 09:18:56.358302       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54734: use of closed network connection
	E1025 09:18:56.560055       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54754: use of closed network connection
	I1025 09:19:06.162437       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.128.58"}
	I1025 09:19:26.530333       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 09:19:26.714147       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.29.201"}
	E1025 09:19:30.349752       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1025 09:19:34.327717       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1025 09:19:55.259931       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 09:19:55.259982       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 09:19:55.273402       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 09:19:55.273451       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 09:19:55.278455       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 09:19:55.278502       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 09:19:55.296446       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 09:19:55.296493       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1025 09:19:55.307547       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1025 09:19:55.307630       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1025 09:19:56.273849       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1025 09:19:56.307785       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1025 09:19:56.343223       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1025 09:20:06.103228       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1025 09:26:28.322508       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [be26a450ba1b] <==
	E1025 09:26:34.936322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:26:35.261884       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:26:35.263120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:26:36.227093       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:26:36.228115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:26:37.170469       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:26:37.171564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:26:40.497198       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:26:40.498415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:26:45.583186       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:26:45.584228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:26:46.435023       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:26:46.436052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:26:59.178617       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:26:59.179756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:26:59.735822       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:26:59.737085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:27:01.591482       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:27:01.592565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:27:14.073395       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:27:14.074366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:27:17.079067       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:27:17.080119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1025 09:27:26.117914       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1025 09:27:26.119053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [a724bb230072] <==
	I1025 09:16:37.079660       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:16:37.206189       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:16:37.312797       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:16:37.312864       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:16:37.313055       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:16:37.359282       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:16:37.359347       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:16:37.368453       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:16:37.370841       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:16:37.371832       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:16:37.393638       1 config.go:200] "Starting service config controller"
	I1025 09:16:37.403494       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:16:37.396119       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:16:37.403709       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:16:37.396831       1 config.go:309] "Starting node config controller"
	I1025 09:16:37.403810       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:16:37.403818       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:16:37.404001       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:16:37.404342       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:16:37.404571       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:16:37.405498       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:16:37.505289       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [7635ed3bd310] <==
	E1025 09:16:28.350340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:16:28.350443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:16:28.350444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:16:28.350522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:16:28.350539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:16:28.350557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:16:28.350677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:16:28.350764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:16:28.350702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:16:28.350757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:16:28.350815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 09:16:28.350905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:16:28.350931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:16:28.350948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:16:28.350995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:16:29.252032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:16:29.315318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:16:29.330630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1025 09:16:29.340806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:16:29.439541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:16:29.481862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:16:29.544961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:16:29.570238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:16:29.592403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1025 09:16:31.648383       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:23:27 addons-456159 kubelet[2240]: E1025 09:23:27.784629    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:23:42 addons-456159 kubelet[2240]: E1025 09:23:42.785008    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:23:51 addons-456159 kubelet[2240]: I1025 09:23:51.782148    2240 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:23:55 addons-456159 kubelet[2240]: E1025 09:23:55.784768    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:24:06 addons-456159 kubelet[2240]: E1025 09:24:06.784711    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:24:17 addons-456159 kubelet[2240]: E1025 09:24:17.784934    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:24:29 addons-456159 kubelet[2240]: E1025 09:24:29.784267    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:24:44 addons-456159 kubelet[2240]: E1025 09:24:44.784735    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:24:57 addons-456159 kubelet[2240]: E1025 09:24:57.784354    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:25:10 addons-456159 kubelet[2240]: E1025 09:25:10.962723    2240 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 25 09:25:10 addons-456159 kubelet[2240]: E1025 09:25:10.962780    2240 kuberuntime_image.go:43] "Failed to pull image" err="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 25 09:25:10 addons-456159 kubelet[2240]: E1025 09:25:10.962875    2240 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(214873df-6ea5-49a2-84da-134b3e4e1ab7): ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 25 09:25:10 addons-456159 kubelet[2240]: E1025 09:25:10.962903    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:25:19 addons-456159 kubelet[2240]: I1025 09:25:19.783032    2240 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:25:22 addons-456159 kubelet[2240]: E1025 09:25:22.785206    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:25:33 addons-456159 kubelet[2240]: E1025 09:25:33.784388    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:25:45 addons-456159 kubelet[2240]: E1025 09:25:45.784907    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:25:59 addons-456159 kubelet[2240]: E1025 09:25:59.784899    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:26:10 addons-456159 kubelet[2240]: E1025 09:26:10.785030    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:26:24 addons-456159 kubelet[2240]: E1025 09:26:24.792458    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:26:35 addons-456159 kubelet[2240]: E1025 09:26:35.784647    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:26:49 addons-456159 kubelet[2240]: I1025 09:26:49.782864    2240 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:26:49 addons-456159 kubelet[2240]: E1025 09:26:49.785154    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:27:03 addons-456159 kubelet[2240]: E1025 09:27:03.784257    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	Oct 25 09:27:17 addons-456159 kubelet[2240]: E1025 09:27:17.784993    2240 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="214873df-6ea5-49a2-84da-134b3e4e1ab7"
	
	
	==> storage-provisioner [55fcf67d9d68] <==
	W1025 09:27:03.781171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:05.784976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:05.790353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:07.793758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:07.798177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:09.801563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:09.805632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:11.809529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:11.815223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:13.818754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:13.822511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:15.825912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:15.830089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:17.833472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:17.837190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:19.840682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:19.845491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:21.848772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:21.852750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:23.856227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:23.861296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:25.864019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:25.869132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:27.871956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:27:27.875780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-456159 -n addons-456159
helpers_test.go:269: (dbg) Run:  kubectl --context addons-456159 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx ingress-nginx-admission-create-hrtd9 ingress-nginx-admission-patch-ghljt
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-456159 describe pod nginx ingress-nginx-admission-create-hrtd9 ingress-nginx-admission-patch-ghljt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-456159 describe pod nginx ingress-nginx-admission-create-hrtd9 ingress-nginx-admission-patch-ghljt: exit status 1 (71.697817ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-456159/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:19:26 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.36
	IPs:
	  IP:  10.244.0.36
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lcgwn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lcgwn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m3s                  default-scheduler  Successfully assigned default/nginx to addons-456159
	  Warning  Failed     6m32s (x2 over 8m2s)  kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m2s (x5 over 8m2s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5m2s (x5 over 8m2s)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m2s (x3 over 7m51s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    3m (x21 over 8m1s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m (x21 over 8m1s)    kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hrtd9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ghljt" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-456159 describe pod nginx ingress-nginx-admission-create-hrtd9 ingress-nginx-admission-patch-ghljt: exit status 1
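The events above show the root cause: every pull of docker.io/nginx:alpine is rejected with toomanyrequests, Docker Hub's unauthenticated pull rate limit. A minimal mitigation sketch, assuming Docker Hub credentials are available to the CI host (dockerhub-creds, DOCKER_USER and DOCKER_PASS are hypothetical placeholders, not part of this run):
	# Create a Docker Hub pull secret in the default namespace (credentials are placeholders).
	kubectl --context addons-456159 create secret docker-registry dockerhub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"
	# Attach it to the default service account so plain pods use it for image pulls.
	kubectl --context addons-456159 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'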
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-456159 addons disable ingress --alsologtostderr -v=1: (7.653912903s)
--- FAIL: TestAddons/parallel/Ingress (491.39s)

TestFunctional/parallel/DashboardCmd (302.06s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-013051 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-013051 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-013051 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-013051 --alsologtostderr -v=1] stderr:
I1025 09:31:49.465849  564255 out.go:360] Setting OutFile to fd 1 ...
I1025 09:31:49.466131  564255 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:31:49.466142  564255 out.go:374] Setting ErrFile to fd 2...
I1025 09:31:49.466146  564255 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:31:49.466340  564255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
I1025 09:31:49.466623  564255 mustload.go:65] Loading cluster: functional-013051
I1025 09:31:49.467044  564255 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:31:49.467623  564255 cli_runner.go:164] Run: docker container inspect functional-013051 --format={{.State.Status}}
I1025 09:31:49.488367  564255 host.go:66] Checking if "functional-013051" exists ...
I1025 09:31:49.488683  564255 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1025 09:31:49.554256  564255 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 09:31:49.54093607 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1025 09:31:49.554426  564255 api_server.go:166] Checking apiserver status ...
I1025 09:31:49.554489  564255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1025 09:31:49.554535  564255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-013051
I1025 09:31:49.575923  564255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/functional-013051/id_rsa Username:docker}
I1025 09:31:49.683795  564255 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9562/cgroup
W1025 09:31:49.693153  564255 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/9562/cgroup: Process exited with status 1
stdout:

stderr:
I1025 09:31:49.693206  564255 ssh_runner.go:195] Run: ls
I1025 09:31:49.697288  564255 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1025 09:31:49.701776  564255 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
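The healthz probe above can be reproduced by hand from the host; a sketch using the endpoint taken straight from the log (-k skips verification of the cluster-CA-signed certificate):
	curl -k https://192.168.49.2:8441/healthz
	# expected body on a healthy apiserver: ok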
W1025 09:31:49.701831  564255 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1025 09:31:49.701991  564255 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:31:49.702001  564255 addons.go:69] Setting dashboard=true in profile "functional-013051"
I1025 09:31:49.702012  564255 addons.go:238] Setting addon dashboard=true in "functional-013051"
I1025 09:31:49.702039  564255 host.go:66] Checking if "functional-013051" exists ...
I1025 09:31:49.702401  564255 cli_runner.go:164] Run: docker container inspect functional-013051 --format={{.State.Status}}
I1025 09:31:49.722743  564255 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1025 09:31:49.724061  564255 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1025 09:31:49.725362  564255 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1025 09:31:49.725406  564255 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1025 09:31:49.725486  564255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-013051
I1025 09:31:49.744025  564255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/functional-013051/id_rsa Username:docker}
I1025 09:31:49.852318  564255 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1025 09:31:49.852344  564255 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1025 09:31:49.866752  564255 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1025 09:31:49.866778  564255 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1025 09:31:49.880327  564255 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1025 09:31:49.880381  564255 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1025 09:31:49.894306  564255 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1025 09:31:49.894344  564255 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1025 09:31:49.908302  564255 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1025 09:31:49.908331  564255 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1025 09:31:49.922298  564255 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1025 09:31:49.922333  564255 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1025 09:31:49.935873  564255 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1025 09:31:49.935898  564255 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1025 09:31:49.949177  564255 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1025 09:31:49.949201  564255 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1025 09:31:49.962557  564255 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1025 09:31:49.962594  564255 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1025 09:31:49.976753  564255 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
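After that kubectl apply, a quick out-of-band check of whether the dashboard workload actually became ready would look like this (illustrative only, not part of the test run):
	kubectl --context functional-013051 -n kubernetes-dashboard get deploy,pods
	kubectl --context functional-013051 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard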
I1025 09:31:50.445368  564255 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-013051 addons enable metrics-server

I1025 09:31:50.446314  564255 addons.go:201] Writing out "functional-013051" config to set dashboard=true...
W1025 09:31:50.446526  564255 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1025 09:31:50.447173  564255 kapi.go:59] client config for functional-013051: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.key", CAFile:"/home/jenkins/minikube-integration/21767-499776/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1025 09:31:50.447604  564255 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1025 09:31:50.447625  564255 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1025 09:31:50.447633  564255 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1025 09:31:50.447638  564255 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1025 09:31:50.447642  564255 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1025 09:31:50.455112  564255 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  3064db07-1914-4e12-b1c0-50d97083c6cd 865 0 2025-10-25 09:31:50 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-25 09:31:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.100.97.206,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.100.97.206],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1025 09:31:50.455245  564255 out.go:285] * Launching proxy ...
* Launching proxy ...
I1025 09:31:50.455304  564255 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-013051 proxy --port 36195]
I1025 09:31:50.455552  564255 dashboard.go:157] Waiting for kubectl to output host:port ...
I1025 09:31:50.503377  564255 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1025 09:31:50.503435  564255 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1025 09:31:50.512292  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1da4caf1-f918-4d0b-9136-ace95e9706cc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc000250ac0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001a1540 TLS:<nil>}
I1025 09:31:50.512370  564255 retry.go:31] will retry after 61.873µs: Temporary Error: unexpected response code: 503
I1025 09:31:50.515807  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[be370d71-2604-4b05-89e1-c68dcacd1241] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc0003f51c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001a1680 TLS:<nil>}
I1025 09:31:50.515871  564255 retry.go:31] will retry after 218.48µs: Temporary Error: unexpected response code: 503
I1025 09:31:50.519211  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b4d1c621-167d-4eeb-8dc6-82d4617d7a91] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc000250bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2000 TLS:<nil>}
I1025 09:31:50.519269  564255 retry.go:31] will retry after 229.123µs: Temporary Error: unexpected response code: 503
I1025 09:31:50.522716  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fea7d827-e198-4848-8c9d-2a98b7ea541b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc000b90a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001a17c0 TLS:<nil>}
I1025 09:31:50.522774  564255 retry.go:31] will retry after 322.812µs: Temporary Error: unexpected response code: 503
I1025 09:31:50.526079  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[418a8d99-5fa9-4042-8f55-efe52bbb601f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc000251000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207400 TLS:<nil>}
I1025 09:31:50.526125  564255 retry.go:31] will retry after 323.441µs: Temporary Error: unexpected response code: 503
I1025 09:31:50.529495  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d7f82a0b-7b56-4be3-be0e-bbf572d63de6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc000251300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001a1900 TLS:<nil>}
I1025 09:31:50.529545  564255 retry.go:31] will retry after 1.135329ms: Temporary Error: unexpected response code: 503
I1025 09:31:50.533873  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[95af8ddf-7227-472e-a105-0fb2a2758e0d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc000b90b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001a1a40 TLS:<nil>}
I1025 09:31:50.533926  564255 retry.go:31] will retry after 876.048µs: Temporary Error: unexpected response code: 503
I1025 09:31:50.537152  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6fd946cf-b3e3-4a8a-81fd-4d8dbfb562cd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc0003f5300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207540 TLS:<nil>}
I1025 09:31:50.537195  564255 retry.go:31] will retry after 1.301464ms: Temporary Error: unexpected response code: 503
I1025 09:31:50.541733  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[35344f33-9816-43df-b65a-4d62d8699483] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc000251d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2140 TLS:<nil>}
I1025 09:31:50.541793  564255 retry.go:31] will retry after 1.9215ms: Temporary Error: unexpected response code: 503
I1025 09:31:50.546262  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[539fc855-702f-47a1-87e7-6ee364652d07] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc000b90c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a2000 TLS:<nil>}
I1025 09:31:50.546313  564255 retry.go:31] will retry after 4.047895ms: Temporary Error: unexpected response code: 503
I1025 09:31:50.552788  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[098f00c2-7309-4cbc-b293-67373e36c030] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc00012a5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207680 TLS:<nil>}
I1025 09:31:50.552855  564255 retry.go:31] will retry after 7.951925ms: Temporary Error: unexpected response code: 503
I1025 09:31:50.563553  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[47149eb0-6594-47ca-a47e-beed006fa25a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc0003f5440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a2140 TLS:<nil>}
I1025 09:31:50.563626  564255 retry.go:31] will retry after 12.148653ms: Temporary Error: unexpected response code: 503
I1025 09:31:50.580051  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1ea07915-5aa6-4db8-a5be-998f4add0774] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc0003f5500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2280 TLS:<nil>}
I1025 09:31:50.580150  564255 retry.go:31] will retry after 16.644294ms: Temporary Error: unexpected response code: 503
I1025 09:31:50.600523  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fd2f7feb-1d83-4718-9bcb-fe18b0fe1efb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc0003f5600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d23c0 TLS:<nil>}
I1025 09:31:50.600612  564255 retry.go:31] will retry after 22.491037ms: Temporary Error: unexpected response code: 503
I1025 09:31:50.627172  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cf902446-e033-4bb4-9784-6460ac229ff5] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc00012b840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2640 TLS:<nil>}
I1025 09:31:50.627272  564255 retry.go:31] will retry after 20.202978ms: Temporary Error: unexpected response code: 503
I1025 09:31:50.651624  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[095cec7c-1583-4355-aaf6-e90e58f0c26d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc000b90d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a2280 TLS:<nil>}
I1025 09:31:50.651691  564255 retry.go:31] will retry after 32.447961ms: Temporary Error: unexpected response code: 503
I1025 09:31:50.687800  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[78b195b8-1f74-4cb5-a488-93b3e5cb17da] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc0007c60c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002077c0 TLS:<nil>}
I1025 09:31:50.687865  564255 retry.go:31] will retry after 67.434214ms: Temporary Error: unexpected response code: 503
I1025 09:31:50.759017  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2da87537-12fb-4418-bde6-1cd339679aae] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc000b90e80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a23c0 TLS:<nil>}
I1025 09:31:50.759092  564255 retry.go:31] will retry after 122.581949ms: Temporary Error: unexpected response code: 503
I1025 09:31:50.886393  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ac1b073a-b00e-4140-ab36-4080a022861d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:50 GMT]] Body:0xc0007c61c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207900 TLS:<nil>}
I1025 09:31:50.886476  564255 retry.go:31] will retry after 218.855766ms: Temporary Error: unexpected response code: 503
I1025 09:31:51.108798  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[83e9b183-8e17-46db-ba5b-0ee1d6a7f4f7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:51 GMT]] Body:0xc0003f5740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a2500 TLS:<nil>}
I1025 09:31:51.108860  564255 retry.go:31] will retry after 174.928911ms: Temporary Error: unexpected response code: 503
I1025 09:31:51.288030  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d8a139de-61cc-49ed-8334-38c397a532a1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:51 GMT]] Body:0xc0003f5800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2780 TLS:<nil>}
I1025 09:31:51.288109  564255 retry.go:31] will retry after 360.326347ms: Temporary Error: unexpected response code: 503
I1025 09:31:51.652189  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[067b5930-7094-46d7-a9af-2506b34eb156] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:51 GMT]] Body:0xc0007c62c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2a00 TLS:<nil>}
I1025 09:31:51.652256  564255 retry.go:31] will retry after 642.604053ms: Temporary Error: unexpected response code: 503
I1025 09:31:52.298271  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[624dec57-2abe-49ea-8659-2b8de6b63612] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:52 GMT]] Body:0xc0003f5ac0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a2640 TLS:<nil>}
I1025 09:31:52.298358  564255 retry.go:31] will retry after 487.886841ms: Temporary Error: unexpected response code: 503
I1025 09:31:52.790340  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1025d3a5-d538-4603-b624-951e3d322534] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:52 GMT]] Body:0xc0003f5b80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2b40 TLS:<nil>}
I1025 09:31:52.790412  564255 retry.go:31] will retry after 1.422648887s: Temporary Error: unexpected response code: 503
I1025 09:31:54.216965  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c037432-cc28-4c53-a150-de116c9945ce] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:54 GMT]] Body:0xc0003f5c80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2c80 TLS:<nil>}
I1025 09:31:54.217052  564255 retry.go:31] will retry after 971.266118ms: Temporary Error: unexpected response code: 503
I1025 09:31:55.191652  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7eed7c18-1148-4057-97f5-a9be71836054] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:55 GMT]] Body:0xc000b91000 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a2780 TLS:<nil>}
I1025 09:31:55.191720  564255 retry.go:31] will retry after 2.674969072s: Temporary Error: unexpected response code: 503
I1025 09:31:57.871701  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ad85e544-bbbf-422b-8218-41989fdd58ba] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:31:57 GMT]] Body:0xc0007c6440 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207a40 TLS:<nil>}
I1025 09:31:57.871791  564255 retry.go:31] will retry after 2.30015207s: Temporary Error: unexpected response code: 503
I1025 09:32:00.175249  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6479439d-51ed-49b5-8c46-40bc40f272fd] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:32:00 GMT]] Body:0xc0003f5d80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a28c0 TLS:<nil>}
I1025 09:32:00.175339  564255 retry.go:31] will retry after 6.549421924s: Temporary Error: unexpected response code: 503
I1025 09:32:06.729324  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cab0b7a7-053c-4c09-add9-dcddc48ba865] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:32:06 GMT]] Body:0xc0003f5e40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207b80 TLS:<nil>}
I1025 09:32:06.729390  564255 retry.go:31] will retry after 11.730850377s: Temporary Error: unexpected response code: 503
I1025 09:32:18.464566  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5f0d3dc0-9d38-46bd-b3b2-4cb563bfe60c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:32:18 GMT]] Body:0xc0008107c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000207cc0 TLS:<nil>}
I1025 09:32:18.464669  564255 retry.go:31] will retry after 11.965273563s: Temporary Error: unexpected response code: 503
I1025 09:32:30.433210  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[20a94a59-e46d-4464-9dd3-a7d6424c1bfa] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:32:30 GMT]] Body:0xc0007c6540 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002d2dc0 TLS:<nil>}
I1025 09:32:30.433295  564255 retry.go:31] will retry after 26.154664531s: Temporary Error: unexpected response code: 503
I1025 09:32:56.592193  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[70b378ec-3ab4-49b8-8f9c-549dcd5ab00d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:32:56 GMT]] Body:0xc000b91180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a2a00 TLS:<nil>}
I1025 09:32:56.592261  564255 retry.go:31] will retry after 21.738888564s: Temporary Error: unexpected response code: 503
I1025 09:33:18.337598  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[560530c9-86c8-42be-88b2-21edfa16454e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:33:18 GMT]] Body:0xc000b91200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0017a2b40 TLS:<nil>}
I1025 09:33:18.337666  564255 retry.go:31] will retry after 54.624280948s: Temporary Error: unexpected response code: 503
I1025 09:34:12.967085  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[612fa6c7-3799-4f8d-93a8-e94ff9945235] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:34:12 GMT]] Body:0xc000b90300 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000368000 TLS:<nil>}
I1025 09:34:12.967161  564255 retry.go:31] will retry after 43.503665393s: Temporary Error: unexpected response code: 503
I1025 09:34:56.474724  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d4e5b101-96fd-42ee-b1cd-8140d182c46a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:34:56 GMT]] Body:0xc000d80180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206280 TLS:<nil>}
I1025 09:34:56.474815  564255 retry.go:31] will retry after 1m11.757118543s: Temporary Error: unexpected response code: 503
I1025 09:36:08.237423  564255 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[aa80d6d7-1f11-4a89-aea6-5006b84db1e5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 25 Oct 2025 09:36:08 GMT]] Body:0xc000b90340 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206640 TLS:<nil>}
I1025 09:36:08.237512  564255 retry.go:31] will retry after 1m20.029906905s: Temporary Error: unexpected response code: 503
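Every probe above returns 503 from the apiserver proxy, which is what the proxy reports while the service has no ready endpoints; since the dashboard image is also pulled from docker.io, this is plausibly the same unauthenticated pull rate limit seen in the Ingress failure. The failing probe can be replayed manually with the exact path from the log:
	kubectl --context functional-013051 proxy --port 36195 &
	curl -sS http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/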
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-013051
helpers_test.go:243: (dbg) docker inspect functional-013051:

-- stdout --
	[
	    {
	        "Id": "37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef",
	        "Created": "2025-10-25T09:28:36.077114686Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 543329,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:28:36.123634247Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef/hostname",
	        "HostsPath": "/var/lib/docker/containers/37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef/hosts",
	        "LogPath": "/var/lib/docker/containers/37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef/37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef-json.log",
	        "Name": "/functional-013051",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-013051:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-013051",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef",
	                "LowerDir": "/var/lib/docker/overlay2/d04406a4b302bb2917be76804203f6ff3d086ab828ea58b1b3577000ed086d2d-init/diff:/var/lib/docker/overlay2/1190de5deda7780238bce4a73ddfc02156e176e9e10c91e09b0cabf2c2920025/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d04406a4b302bb2917be76804203f6ff3d086ab828ea58b1b3577000ed086d2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d04406a4b302bb2917be76804203f6ff3d086ab828ea58b1b3577000ed086d2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d04406a4b302bb2917be76804203f6ff3d086ab828ea58b1b3577000ed086d2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-013051",
	                "Source": "/var/lib/docker/volumes/functional-013051/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-013051",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-013051",
	                "name.minikube.sigs.k8s.io": "functional-013051",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08aeb02fe70c2344c17476f59fa3a862309d2fe4746baa818542aff3665bee67",
	            "SandboxKey": "/var/run/docker/netns/08aeb02fe70c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-013051": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:b0:ad:6a:26:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e394c09242ef04c4ed8cb204e73fc44353107b0c1768c85597d6cb7faf470fe3",
	                    "EndpointID": "8bc54d57dd0e63f3306d71ad45d6390e39ffad45fca21cb3f248295817782990",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-013051",
	                        "37cb36089ec0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
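The Ports block in the inspect output is what minikube's sshutil reads to reach the node; the same host port can be extracted with the template minikube itself runs earlier in this log:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-013051
	# prints 33173, matching the "new ssh client" lines above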
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-013051 -n functional-013051
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-013051 logs -n 25: (1.073104899s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                               ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image      │ functional-013051 image save --daemon kicbase/echo-server:functional-013051 --alsologtostderr                                    │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ mount      │ -p functional-013051 /tmp/TestFunctionalparallelMountCmdspecific-port334459019/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ ssh        │ functional-013051 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ start      │ -p functional-013051 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker                      │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ ssh        │ functional-013051 ssh sudo cat /etc/ssl/certs/503346.pem                                                                         │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh        │ functional-013051 ssh sudo cat /usr/share/ca-certificates/503346.pem                                                             │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh        │ functional-013051 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh        │ functional-013051 ssh sudo cat /etc/ssl/certs/51391683.0                                                                         │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh        │ functional-013051 ssh -- ls -la /mount-9p                                                                                        │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh        │ functional-013051 ssh sudo cat /etc/ssl/certs/5033462.pem                                                                        │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh        │ functional-013051 ssh sudo umount -f /mount-9p                                                                                   │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ ssh        │ functional-013051 ssh sudo cat /usr/share/ca-certificates/5033462.pem                                                            │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh        │ functional-013051 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                         │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh        │ functional-013051 ssh findmnt -T /mount1                                                                                         │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ mount      │ -p functional-013051 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1428527505/001:/mount1 --alsologtostderr -v=1               │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ mount      │ -p functional-013051 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1428527505/001:/mount2 --alsologtostderr -v=1               │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ mount      │ -p functional-013051 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1428527505/001:/mount3 --alsologtostderr -v=1               │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ dashboard  │ --url --port 36195 -p functional-013051 --alsologtostderr -v=1                                                                   │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ ssh        │ functional-013051 ssh findmnt -T /mount1                                                                                         │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh        │ functional-013051 ssh findmnt -T /mount2                                                                                         │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh        │ functional-013051 ssh findmnt -T /mount3                                                                                         │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ mount      │ -p functional-013051 --kill=true                                                                                                 │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ docker-env │ functional-013051 docker-env                                                                                                     │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ docker-env │ functional-013051 docker-env                                                                                                     │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh        │ functional-013051 ssh sudo cat /etc/test/nested/copy/503346/hosts                                                                │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	└────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:31:47
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:31:47.446424  562892 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:31:47.446775  562892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:31:47.446788  562892 out.go:374] Setting ErrFile to fd 2...
	I1025 09:31:47.446793  562892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:31:47.447140  562892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
	I1025 09:31:47.447920  562892 out.go:368] Setting JSON to false
	I1025 09:31:47.449085  562892 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4455,"bootTime":1761380252,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:31:47.449161  562892 start.go:141] virtualization: kvm guest
	I1025 09:31:47.450767  562892 out.go:179] * [functional-013051] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:31:47.452422  562892 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:31:47.452455  562892 notify.go:220] Checking for updates...
	I1025 09:31:47.455160  562892 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:31:47.456673  562892 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig
	I1025 09:31:47.458138  562892 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube
	I1025 09:31:47.459719  562892 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:31:47.461132  562892 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:31:47.463069  562892 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:31:47.463887  562892 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:31:47.489930  562892 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:31:47.490043  562892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:31:47.549908  562892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 09:31:47.539959558 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:31:47.550038  562892 docker.go:318] overlay module found
	I1025 09:31:47.551897  562892 out.go:179] * Using the docker driver based on existing profile
	I1025 09:31:47.553183  562892 start.go:305] selected driver: docker
	I1025 09:31:47.553200  562892 start.go:925] validating driver "docker" against &{Name:functional-013051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-013051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:31:47.553297  562892 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:31:47.555201  562892 out.go:203] 
	W1025 09:31:47.556645  562892 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 09:31:47.558026  562892 out.go:203] 
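The start above is the "--dry-run --memory 250MB" invocation recorded in the Audit table, so the RSRC_INSUFFICIENT_REQ_MEMORY exit is the expected outcome of that sub-test: minikube validates the request against its 1800MB usable floor before touching the cluster. As a hedged sketch only (the 2048mb value is assumed, not taken from this report), the same dry-run clears validation once the request meets the floor:

	out/minikube-linux-amd64 start -p functional-013051 --dry-run --memory 2048mb --driver=docker --container-runtime=docker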
	
	
	==> Docker <==
	Oct 25 09:32:03 functional-013051 dockerd[7301]: time="2025-10-25T09:32:03.133751452Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:32:03 functional-013051 dockerd[7301]: time="2025-10-25T09:32:03.161792428Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:08 functional-013051 dockerd[7301]: time="2025-10-25T09:32:08.163546210Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:12 functional-013051 dockerd[7301]: time="2025-10-25T09:32:12.174532719Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:20 functional-013051 dockerd[7301]: time="2025-10-25T09:32:20.163813512Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:26 functional-013051 dockerd[7301]: time="2025-10-25T09:32:26.088615083Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:32:26 functional-013051 dockerd[7301]: time="2025-10-25T09:32:26.119405213Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:26 functional-013051 dockerd[7301]: time="2025-10-25T09:32:26.139029233Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:32:26 functional-013051 dockerd[7301]: time="2025-10-25T09:32:26.170934974Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:31 functional-013051 dockerd[7301]: time="2025-10-25T09:32:31.177263106Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:53 functional-013051 dockerd[7301]: time="2025-10-25T09:32:53.234565178Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:53 functional-013051 cri-dockerd[8056]: time="2025-10-25T09:32:53Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 25 09:33:12 functional-013051 dockerd[7301]: time="2025-10-25T09:33:12.168655104Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:33:12 functional-013051 dockerd[7301]: time="2025-10-25T09:33:12.265915793Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:33:14 functional-013051 dockerd[7301]: time="2025-10-25T09:33:14.087774312Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:33:14 functional-013051 dockerd[7301]: time="2025-10-25T09:33:14.119728812Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:33:16 functional-013051 dockerd[7301]: time="2025-10-25T09:33:16.085766709Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:33:16 functional-013051 dockerd[7301]: time="2025-10-25T09:33:16.119636153Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:34:18 functional-013051 dockerd[7301]: time="2025-10-25T09:34:18.185127229Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:34:41 functional-013051 dockerd[7301]: time="2025-10-25T09:34:41.089152295Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:34:41 functional-013051 dockerd[7301]: time="2025-10-25T09:34:41.118911082Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:34:41 functional-013051 dockerd[7301]: time="2025-10-25T09:34:41.135360854Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:34:41 functional-013051 dockerd[7301]: time="2025-10-25T09:34:41.163181698Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:34:42 functional-013051 dockerd[7301]: time="2025-10-25T09:34:42.170329744Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:34:44 functional-013051 dockerd[7301]: time="2025-10-25T09:34:44.165122921Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
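	Every "toomanyrequests" line above is Docker Hub's unauthenticated pull rate limit, which is what keeps the dashboard, metrics-scraper, and nginx:alpine images in ImagePullBackOff. A hedged workaround sketch, assuming an authenticated local daemon can still pull: fetch the image on the host and side-load it into the node with minikube's "image load" subcommand so the cluster never contacts the Hub.

	docker login                                  # authenticated pulls get a higher rate limit
	docker pull docker.io/nginx:alpine
	out/minikube-linux-amd64 -p functional-013051 image load docker.io/nginx:alpine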
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	43cb8a90fefa4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   aef06a9da48bd       busybox-mount                               default
	7f275a1a40576       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   a02a715d473e1       hello-node-connect-7d85dfc575-tpmn9         default
	42166a976ed99       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   8cdb97bb8c0f0       hello-node-75c85bcc94-rf646                 default
	03b0c7a9b5d6b       52546a367cc9e                                                                                         5 minutes ago       Running             coredns                   2                   aea3729f69afc       coredns-66bc5c9577-fjhbs                    kube-system
	ef579063752b0       fc25172553d79                                                                                         5 minutes ago       Running             kube-proxy                2                   6f9767b6042c9       kube-proxy-5krpb                            kube-system
	6e409a0f9fe46       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       4                   1273d9f0bab96       storage-provisioner                         kube-system
	34f580b14e1c2       c3994bc696102                                                                                         5 minutes ago       Running             kube-apiserver            0                   aa8024719e6c1       kube-apiserver-functional-013051            kube-system
	1f79bcf8431dd       7dd6aaa1717ab                                                                                         5 minutes ago       Running             kube-scheduler            3                   ff63caa84d3be       kube-scheduler-functional-013051            kube-system
	fc41b6b617f2b       c80c8dbafe7dd                                                                                         5 minutes ago       Running             kube-controller-manager   3                   34785bb49483f       kube-controller-manager-functional-013051   kube-system
	51d592f1f5f28       5f1f5298c888d                                                                                         5 minutes ago       Running             etcd                      2                   77ada2e139625       etcd-functional-013051                      kube-system
	c2f84e8a1b7dd       c80c8dbafe7dd                                                                                         5 minutes ago       Exited              kube-controller-manager   2                   654f972c5877b       kube-controller-manager-functional-013051   kube-system
	dacc4d625f3e5       7dd6aaa1717ab                                                                                         5 minutes ago       Exited              kube-scheduler            2                   c619257243882       kube-scheduler-functional-013051            kube-system
	4e03e52b9d86e       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       3                   64d440ce2ea7e       storage-provisioner                         kube-system
	b285170b61930       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   b7402314fb54a       coredns-66bc5c9577-fjhbs                    kube-system
	9eb254736ef0a       5f1f5298c888d                                                                                         6 minutes ago       Exited              etcd                      1                   7e87f0f902f11       etcd-functional-013051                      kube-system
	af9ce40902859       fc25172553d79                                                                                         6 minutes ago       Exited              kube-proxy                1                   fbff289d0e4e0       kube-proxy-5krpb                            kube-system
	
	
	==> coredns [03b0c7a9b5d6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54283 - 46851 "HINFO IN 2641288560107263801.2003115260268779043. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017540076s
	
	
	==> coredns [b285170b6193] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51027 - 52382 "HINFO IN 774946281068283468.250012538811656753. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.01770873s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
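	The "forbidden" list errors from this earlier coredns instance occur while the restarted apiserver is still syncing RBAC; the kubernetes plugin retries and, as the later lines show, the server comes up once the API answers. A hedged spot-check, assuming the stock kubeadm role name applies to this cluster:

	kubectl --context functional-013051 describe clusterrole system:coredns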
	
	
	==> describe nodes <==
	Name:               functional-013051
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-013051
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=functional-013051
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_28_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:28:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-013051
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:36:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:35:14 +0000   Sat, 25 Oct 2025 09:28:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:35:14 +0000   Sat, 25 Oct 2025 09:28:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:35:14 +0000   Sat, 25 Oct 2025 09:28:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:35:14 +0000   Sat, 25 Oct 2025 09:28:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-013051
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2b6fb497-e3ec-4e05-b4ac-ce48db73d933
	  Boot ID:                    2fda8ac7-743b-4d90-8011-17dbcec8d3ad
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-rf646                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  default                     hello-node-connect-7d85dfc575-tpmn9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	  default                     mysql-5bb876957f-7kwsc                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     4m58s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 coredns-66bc5c9577-fjhbs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m52s
	  kube-system                 etcd-functional-013051                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m57s
	  kube-system                 kube-apiserver-functional-013051              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-controller-manager-functional-013051     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-proxy-5krpb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 kube-scheduler-functional-013051              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m57s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-zv5rm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xhprw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m50s                  kube-proxy       
	  Normal   Starting                 5m39s                  kube-proxy       
	  Normal   Starting                 6m39s                  kube-proxy       
	  Normal   Starting                 7m57s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  7m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m57s                  kubelet          Node functional-013051 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m57s                  kubelet          Node functional-013051 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m57s                  kubelet          Node functional-013051 status is now: NodeHasSufficientPID
	  Normal   NodeReady                7m54s                  kubelet          Node functional-013051 status is now: NodeReady
	  Normal   RegisteredNode           7m53s                  node-controller  Node functional-013051 event: Registered Node functional-013051 in Controller
	  Warning  ContainerGCFailed        6m57s                  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   RegisteredNode           6m37s                  node-controller  Node functional-013051 event: Registered Node functional-013051 in Controller
	  Normal   Starting                 5m43s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m43s (x8 over 5m43s)  kubelet          Node functional-013051 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m43s (x8 over 5m43s)  kubelet          Node functional-013051 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m43s (x7 over 5m43s)  kubelet          Node functional-013051 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           5m38s                  node-controller  Node functional-013051 event: Registered Node functional-013051 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 35 d3 1d 1a 69 08 06
	[  +0.384426] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 a9 47 60 b0 27 08 06
	[  +0.023599] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 c3 01 22 23 63 08 06
	[ +20.439263] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a 06 8b 23 bb 13 08 06
	[  +7.760078] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a b7 37 cd 89 06 08 06
	[  +0.000495] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 70 1e fe 24 24 08 06
	[Oct25 09:19] IPv4: martian source 10.244.0.1 from 10.244.0.34, on dev eth0
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff ca 69 af 92 55 8d 08 06
	[  +0.000498] IPv4: martian source 10.244.0.34 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 70 1e fe 24 24 08 06
	[  +0.000602] IPv4: martian source 10.244.0.34 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 af 9c e7 7c 87 08 06
	[Oct25 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 cb c2 16 78 86 08 06
	[  +0.000817] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 f6 aa 45 b9 f4 08 06
	[Oct25 09:30] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 a3 a0 a4 aa f3 08 06
	[Oct25 09:31] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 d1 a1 fc a6 37 08 06
	
	
	==> etcd [51d592f1f5f2] <==
	{"level":"warn","ts":"2025-10-25T09:31:08.857828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.864155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.871668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.878442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.889783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.899494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.905666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.912812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.918993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.925374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.932412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.940077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.947481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.954319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.966875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.972981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.979417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.987209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.993570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.999999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:09.006551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:09.020484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:09.037539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:09.044995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:09.105160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41214","server-name":"","error":"EOF"}
	
	
	==> etcd [9eb254736ef0] <==
	{"level":"warn","ts":"2025-10-25T09:30:09.360670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.367952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.382877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.386671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.394573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.402218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.463502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41140","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:30:52.339693Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T09:30:52.339794Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-013051","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-25T09:30:52.339909Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:30:59.341974Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:30:59.342083Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:30:59.342128Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-25T09:30:59.342189Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-25T09:30:59.342156Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-25T09:30:59.342210Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-25T09:30:59.342153Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:30:59.342215Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:30:59.342228Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T09:30:59.342229Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-25T09:30:59.342240Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:30:59.345601Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-25T09:30:59.345685Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:30:59.345721Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-25T09:30:59.345750Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-013051","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:36:50 up  1:19,  0 user,  load average: 0.13, 0.36, 1.21
	Linux functional-013051 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [34f580b14e1c] <==
	I1025 09:31:09.562526       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:31:09.565056       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 09:31:09.568342       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:31:09.578085       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:31:09.585324       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:31:09.585359       1 policy_source.go:240] refreshing policies
	I1025 09:31:09.590033       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:31:10.091993       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:31:10.091993       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:31:10.463492       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:31:11.227520       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:31:11.270149       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:31:11.305380       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:31:11.312276       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:31:13.040012       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:31:13.139175       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:31:25.345920       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.200.127"}
	I1025 09:31:29.540464       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:31:29.657011       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.158.117"}
	I1025 09:31:30.766047       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.104.232.211"}
	I1025 09:31:31.701227       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.234.37"}
	I1025 09:31:50.298033       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:31:50.425022       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.97.206"}
	I1025 09:31:50.437956       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.12.84"}
	I1025 09:31:52.622672       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.110.144.143"}
	
	
	==> kube-controller-manager [c2f84e8a1b7d] <==
	
	
	==> kube-controller-manager [fc41b6b617f2] <==
	I1025 09:31:12.882006       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:31:12.886221       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:31:12.886278       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:31:12.886304       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:31:12.886314       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:31:12.886334       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:31:12.886342       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 09:31:12.886364       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:31:12.886380       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:31:12.886443       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:31:12.886456       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:31:12.886383       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:31:12.886926       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:31:12.888246       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:31:12.890349       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:31:12.890371       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:31:12.890413       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:31:12.892783       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:31:12.913176       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 09:31:50.355880       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:31:50.360458       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:31:50.365297       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:31:50.365527       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:31:50.371057       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:31:50.377645       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
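	The repeated dashboard sync errors are an ordering race: the ReplicaSets were created before the kubernetes-dashboard ServiceAccount had finished applying, and the controller retries until it exists. A hedged confirmation that the account eventually landed (this command is illustrative, not part of the test run):

	kubectl --context functional-013051 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard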
	
	
	==> kube-proxy [af9ce4090285] <==
	I1025 09:30:08.507039       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:30:08.577598       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1025 09:30:09.919080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-013051\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1025 09:30:10.978284       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:30:10.978330       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:30:10.978447       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:30:11.003822       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:30:11.003880       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:30:11.010702       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:30:11.011136       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:30:11.011158       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:30:11.012740       1 config.go:200] "Starting service config controller"
	I1025 09:30:11.012774       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:30:11.012778       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:30:11.012793       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:30:11.012974       1 config.go:309] "Starting node config controller"
	I1025 09:30:11.012985       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:30:11.012992       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:30:11.013122       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:30:11.013134       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:30:11.113207       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:30:11.113379       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:30:11.113381       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [ef579063752b] <==
	I1025 09:31:10.679294       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:31:10.761118       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:31:10.861943       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:31:10.862014       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:31:10.862120       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:31:10.885745       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:31:10.885797       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:31:10.891536       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:31:10.892417       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:31:10.892444       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:31:10.894437       1 config.go:200] "Starting service config controller"
	I1025 09:31:10.894460       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:31:10.894491       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:31:10.894518       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:31:10.894531       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:31:10.894532       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:31:10.894548       1 config.go:309] "Starting node config controller"
	I1025 09:31:10.894559       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:31:10.894566       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:31:10.995487       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:31:10.995616       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:31:10.995644       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1f79bcf8431d] <==
	I1025 09:31:08.125332       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:31:09.491175       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:31:09.491310       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:31:09.491353       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:31:09.491405       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:31:09.504937       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:31:09.504963       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:31:09.506810       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:31:09.506853       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:31:09.507062       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:31:09.507117       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:31:09.607929       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [dacc4d625f3e] <==
	I1025 09:31:05.216517       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Oct 25 09:35:49 functional-013051 kubelet[9237]: E1025 09:35:49.078185    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7kwsc" podUID="ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d"
	Oct 25 09:35:50 functional-013051 kubelet[9237]: E1025 09:35:50.072008    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zv5rm" podUID="91c2efdf-5f82-45fd-a706-b773dbe83fe5"
	Oct 25 09:35:51 functional-013051 kubelet[9237]: E1025 09:35:51.069702    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7d430e1f-7d32-4d96-8f2f-002b060f8c85"
	Oct 25 09:35:55 functional-013051 kubelet[9237]: E1025 09:35:55.072006    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhprw" podUID="845035bb-b40a-4128-8f0d-985421b282db"
	Oct 25 09:35:59 functional-013051 kubelet[9237]: E1025 09:35:59.071232    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="27ff61fe-8b0f-4927-be0c-91a13ab78c07"
	Oct 25 09:36:02 functional-013051 kubelet[9237]: E1025 09:36:02.070819    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zv5rm" podUID="91c2efdf-5f82-45fd-a706-b773dbe83fe5"
	Oct 25 09:36:04 functional-013051 kubelet[9237]: E1025 09:36:04.071307    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7kwsc" podUID="ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d"
	Oct 25 09:36:06 functional-013051 kubelet[9237]: E1025 09:36:06.069665    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7d430e1f-7d32-4d96-8f2f-002b060f8c85"
	Oct 25 09:36:06 functional-013051 kubelet[9237]: E1025 09:36:06.071549    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhprw" podUID="845035bb-b40a-4128-8f0d-985421b282db"
	Oct 25 09:36:13 functional-013051 kubelet[9237]: E1025 09:36:13.071214    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="27ff61fe-8b0f-4927-be0c-91a13ab78c07"
	Oct 25 09:36:13 functional-013051 kubelet[9237]: E1025 09:36:13.071217    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zv5rm" podUID="91c2efdf-5f82-45fd-a706-b773dbe83fe5"
	Oct 25 09:36:15 functional-013051 kubelet[9237]: E1025 09:36:15.071234    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7kwsc" podUID="ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d"
	Oct 25 09:36:18 functional-013051 kubelet[9237]: E1025 09:36:18.069841    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7d430e1f-7d32-4d96-8f2f-002b060f8c85"
	Oct 25 09:36:21 functional-013051 kubelet[9237]: E1025 09:36:21.071402    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhprw" podUID="845035bb-b40a-4128-8f0d-985421b282db"
	Oct 25 09:36:26 functional-013051 kubelet[9237]: E1025 09:36:26.071827    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zv5rm" podUID="91c2efdf-5f82-45fd-a706-b773dbe83fe5"
	Oct 25 09:36:26 functional-013051 kubelet[9237]: E1025 09:36:26.071906    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="27ff61fe-8b0f-4927-be0c-91a13ab78c07"
	Oct 25 09:36:30 functional-013051 kubelet[9237]: E1025 09:36:30.071770    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7kwsc" podUID="ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d"
	Oct 25 09:36:31 functional-013051 kubelet[9237]: E1025 09:36:31.070040    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7d430e1f-7d32-4d96-8f2f-002b060f8c85"
	Oct 25 09:36:35 functional-013051 kubelet[9237]: E1025 09:36:35.071281    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhprw" podUID="845035bb-b40a-4128-8f0d-985421b282db"
	Oct 25 09:36:37 functional-013051 kubelet[9237]: E1025 09:36:37.078544    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="27ff61fe-8b0f-4927-be0c-91a13ab78c07"
	Oct 25 09:36:41 functional-013051 kubelet[9237]: E1025 09:36:41.071744    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zv5rm" podUID="91c2efdf-5f82-45fd-a706-b773dbe83fe5"
	Oct 25 09:36:42 functional-013051 kubelet[9237]: E1025 09:36:42.069563    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7d430e1f-7d32-4d96-8f2f-002b060f8c85"
	Oct 25 09:36:42 functional-013051 kubelet[9237]: E1025 09:36:42.071254    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7kwsc" podUID="ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d"
	Oct 25 09:36:48 functional-013051 kubelet[9237]: E1025 09:36:48.071959    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhprw" podUID="845035bb-b40a-4128-8f0d-985421b282db"
	Oct 25 09:36:49 functional-013051 kubelet[9237]: E1025 09:36:49.071008    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="27ff61fe-8b0f-4927-be0c-91a13ab78c07"
	
	
	==> storage-provisioner [4e03e52b9d86] <==
	I1025 09:30:34.194836       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:30:34.202655       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:30:34.202713       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:30:34.205046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:37.660498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:41.920930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:45.519728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:48.573284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:51.595314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:51.599969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:30:51.600134       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:30:51.600286       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-013051_182f8721-bf79-463f-bebb-81336bb17881!
	I1025 09:30:51.600284       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce306265-d55a-4881-b854-cbfdcc1fb794", APIVersion:"v1", ResourceVersion:"573", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-013051_182f8721-bf79-463f-bebb-81336bb17881 became leader
	W1025 09:30:51.602762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:51.606323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:30:51.700634       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-013051_182f8721-bf79-463f-bebb-81336bb17881!
	
	
	==> storage-provisioner [6e409a0f9fe4] <==
	W1025 09:36:25.076497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:27.079946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:27.084340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:29.087620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:29.093172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:31.096024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:31.101173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:33.104493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:33.108523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:35.111814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:35.116725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:37.120067       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:37.124041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:39.127874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:39.131919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:41.135564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:41.139544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:43.142989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:43.147061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:45.150458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:45.154459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:47.159431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:47.163496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:49.166821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:49.171254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-013051 -n functional-013051
helpers_test.go:269: (dbg) Run:  kubectl --context functional-013051 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-7kwsc nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zv5rm kubernetes-dashboard-855c9754f9-xhprw
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-013051 describe pod busybox-mount mysql-5bb876957f-7kwsc nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zv5rm kubernetes-dashboard-855c9754f9-xhprw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-013051 describe pod busybox-mount mysql-5bb876957f-7kwsc nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zv5rm kubernetes-dashboard-855c9754f9-xhprw: exit status 1 (89.357009ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-013051/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:31:42 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://43cb8a90fefa4ad849aabaf21c6c964ea7968749adeb6d5c8671a98b0779c7e1
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 25 Oct 2025 09:31:44 +0000
	      Finished:     Sat, 25 Oct 2025 09:31:44 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l59mx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-l59mx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5m9s  default-scheduler  Successfully assigned default/busybox-mount to functional-013051
	  Normal  Pulling    5m9s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m8s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.438s (1.438s including waiting). Image size: 4403845 bytes.
	  Normal  Created    5m7s  kubelet            Created container: mount-munger
	  Normal  Started    5m7s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-7kwsc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-013051/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:31:52 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.14
	IPs:
	  IP:           10.244.0.14
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wrpmp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wrpmp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  4m58s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-7kwsc to functional-013051
	  Normal   Pulling    2m7s (x5 over 4m58s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m7s (x5 over 4m58s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m7s (x5 over 4m58s)  kubelet            Error: ErrImagePull
	  Warning  Failed     62s (x15 over 4m58s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    9s (x19 over 4m58s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-013051/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:31:30 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dcmzn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dcmzn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m20s                  default-scheduler  Successfully assigned default/nginx-svc to functional-013051
	  Warning  Failed     3m58s (x2 over 5m20s)  kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m33s (x5 over 5m20s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     2m33s (x5 over 5m20s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m33s (x3 over 5m7s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    14s (x21 over 5m19s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     14s (x21 over 5m19s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-013051/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:31:36 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kbsxr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-kbsxr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m14s                 default-scheduler  Successfully assigned default/sp-pod to functional-013051
	  Normal   Pulling    2m9s (x5 over 5m14s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m9s (x5 over 5m14s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m9s (x5 over 5m14s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    9s (x20 over 5m14s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     9s (x20 over 5m14s)   kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-zv5rm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xhprw" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-013051 describe pod busybox-mount mysql-5bb876957f-7kwsc nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zv5rm kubernetes-dashboard-855c9754f9-xhprw: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.06s)
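
Note: every ImagePullBackOff in this section traces to the same root cause, visible verbatim in the kubelet log above: Docker Hub's unauthenticated pull rate limit ("toomanyrequests"). One mitigation sketch for reruns, assuming a Docker Hub account is available (the user name below is a placeholder, not part of this run):

	# authenticate the host Docker daemon so pulls count against an account quota
	docker login -u <dockerhub-user>
	# pre-load the failing images into the cluster so the kubelet never pulls from the Hub
	minikube -p functional-013051 image load docker.io/nginx:alpine
	minikube -p functional-013051 image load docker.io/mysql:5.7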

TestFunctional/parallel/PersistentVolumeClaim (368.81s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [6ffac042-5787-464c-a578-dcac28a2a9c3] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003238227s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-013051 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-013051 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-013051 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-013051 apply -f testdata/storage-provisioner/pod.yaml
I1025 09:31:36.557295  503346 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7d430e1f-7d32-4d96-8f2f-002b060f8c85] Pending
helpers_test.go:352: "sp-pod" [7d430e1f-7d32-4d96-8f2f-002b060f8c85] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-013051 -n functional-013051
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-25 09:37:36.897626374 +0000 UTC m=+1318.457445672
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-013051 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-013051 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-013051/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:31:36 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:  10.244.0.10
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kbsxr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-kbsxr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/sp-pod to functional-013051
Normal   Pulling    2m54s (x5 over 5m59s)  kubelet            Pulling image "docker.io/nginx"
Warning  Failed     2m54s (x5 over 5m59s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m54s (x5 over 5m59s)  kubelet            Error: ErrImagePull
Warning  Failed     54s (x20 over 5m59s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    42s (x21 over 5m59s)   kubelet            Back-off pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-013051 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-013051 logs sp-pod -n default: exit status 1 (72.154413ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-013051 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvc test pod: test=storage-provisioner within 6m0s: context deadline exceeded
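
For triage, the wait that fails at functional_test_pvc_test.go:140-141 is roughly equivalent to the following hand-check (a sketch of the condition being polled, not the test's actual implementation):

	# poll the same label selector the test uses, with the same 6m budget
	kubectl --context functional-013051 -n default wait pod \
	  -l test=storage-provisioner --for=condition=Ready --timeout=6m0s
	# this times out because the docker.io/nginx pull for "myfrontend" is rate-limited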
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-013051
helpers_test.go:243: (dbg) docker inspect functional-013051:

-- stdout --
	[
	    {
	        "Id": "37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef",
	        "Created": "2025-10-25T09:28:36.077114686Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 543329,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:28:36.123634247Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef/hostname",
	        "HostsPath": "/var/lib/docker/containers/37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef/hosts",
	        "LogPath": "/var/lib/docker/containers/37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef/37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef-json.log",
	        "Name": "/functional-013051",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-013051:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-013051",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef",
	                "LowerDir": "/var/lib/docker/overlay2/d04406a4b302bb2917be76804203f6ff3d086ab828ea58b1b3577000ed086d2d-init/diff:/var/lib/docker/overlay2/1190de5deda7780238bce4a73ddfc02156e176e9e10c91e09b0cabf2c2920025/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d04406a4b302bb2917be76804203f6ff3d086ab828ea58b1b3577000ed086d2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d04406a4b302bb2917be76804203f6ff3d086ab828ea58b1b3577000ed086d2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d04406a4b302bb2917be76804203f6ff3d086ab828ea58b1b3577000ed086d2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-013051",
	                "Source": "/var/lib/docker/volumes/functional-013051/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-013051",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-013051",
	                "name.minikube.sigs.k8s.io": "functional-013051",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08aeb02fe70c2344c17476f59fa3a862309d2fe4746baa818542aff3665bee67",
	            "SandboxKey": "/var/run/docker/netns/08aeb02fe70c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-013051": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:b0:ad:6a:26:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e394c09242ef04c4ed8cb204e73fc44353107b0c1768c85597d6cb7faf470fe3",
	                    "EndpointID": "8bc54d57dd0e63f3306d71ad45d6390e39ffad45fca21cb3f248295817782990",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-013051",
	                        "37cb36089ec0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
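
The host-port bindings buried in the inspect dump above can be pulled out directly with docker's Go-template syntax (a convenience sketch for triage):

	docker inspect -f '{{json .NetworkSettings.Ports}}' functional-013051
	# e.g. 8441/tcp (the apiserver port) is published on 127.0.0.1:33176 in this run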
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-013051 -n functional-013051
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-013051 logs -n 25: (1.074973136s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-013051 ssh sudo umount -f /mount-9p                                                                     │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ ssh            │ functional-013051 ssh sudo cat /usr/share/ca-certificates/5033462.pem                                              │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh            │ functional-013051 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                           │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh            │ functional-013051 ssh findmnt -T /mount1                                                                           │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ mount          │ -p functional-013051 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1428527505/001:/mount1 --alsologtostderr -v=1 │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ mount          │ -p functional-013051 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1428527505/001:/mount2 --alsologtostderr -v=1 │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ mount          │ -p functional-013051 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1428527505/001:/mount3 --alsologtostderr -v=1 │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-013051 --alsologtostderr -v=1                                                     │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ ssh            │ functional-013051 ssh findmnt -T /mount1                                                                           │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh            │ functional-013051 ssh findmnt -T /mount2                                                                           │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh            │ functional-013051 ssh findmnt -T /mount3                                                                           │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ mount          │ -p functional-013051 --kill=true                                                                                   │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ docker-env     │ functional-013051 docker-env                                                                                       │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ docker-env     │ functional-013051 docker-env                                                                                       │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh            │ functional-013051 ssh sudo cat /etc/test/nested/copy/503346/hosts                                                  │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ image          │ functional-013051 image ls --format short --alsologtostderr                                                        │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ image          │ functional-013051 image ls --format yaml --alsologtostderr                                                         │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ ssh            │ functional-013051 ssh pgrep buildkitd                                                                              │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ image          │ functional-013051 image build -t localhost/my-image:functional-013051 testdata/build --alsologtostderr             │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ image          │ functional-013051 image ls                                                                                         │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ image          │ functional-013051 image ls --format json --alsologtostderr                                                         │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ image          │ functional-013051 image ls --format table --alsologtostderr                                                        │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ update-context │ functional-013051 update-context --alsologtostderr -v=2                                                            │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ update-context │ functional-013051 update-context --alsologtostderr -v=2                                                            │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ update-context │ functional-013051 update-context --alsologtostderr -v=2                                                            │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:31:47
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:31:47.446424  562892 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:31:47.446775  562892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:31:47.446788  562892 out.go:374] Setting ErrFile to fd 2...
	I1025 09:31:47.446793  562892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:31:47.447140  562892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
	I1025 09:31:47.447920  562892 out.go:368] Setting JSON to false
	I1025 09:31:47.449085  562892 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4455,"bootTime":1761380252,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:31:47.449161  562892 start.go:141] virtualization: kvm guest
	I1025 09:31:47.450767  562892 out.go:179] * [functional-013051] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:31:47.452422  562892 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:31:47.452455  562892 notify.go:220] Checking for updates...
	I1025 09:31:47.455160  562892 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:31:47.456673  562892 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig
	I1025 09:31:47.458138  562892 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube
	I1025 09:31:47.459719  562892 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:31:47.461132  562892 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:31:47.463069  562892 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:31:47.463887  562892 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:31:47.489930  562892 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:31:47.490043  562892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:31:47.549908  562892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 09:31:47.539959558 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:31:47.550038  562892 docker.go:318] overlay module found
	I1025 09:31:47.551897  562892 out.go:179] * Using the docker driver based on existing profile
	I1025 09:31:47.553183  562892 start.go:305] selected driver: docker
	I1025 09:31:47.553200  562892 start.go:925] validating driver "docker" against &{Name:functional-013051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-013051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:31:47.553297  562892 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:31:47.555201  562892 out.go:203] 
	W1025 09:31:47.556645  562892 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 09:31:47.558026  562892 out.go:203] 
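Note: the start attempt above aborts by design: RSRC_INSUFFICIENT_REQ_MEMORY is minikube's guard against a requested memory allocation (here 250MiB) below the usable minimum of 1800MB. A minimal sketch of an invocation that reproduces this exit, assuming the docker driver (the exact flags of the original run are not shown in this excerpt):

    out/minikube-linux-amd64 start -p functional-013051 --memory=250mb --driver=docker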
	
	
	==> Docker <==
	Oct 25 09:32:26 functional-013051 dockerd[7301]: time="2025-10-25T09:32:26.170934974Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:31 functional-013051 dockerd[7301]: time="2025-10-25T09:32:31.177263106Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:53 functional-013051 dockerd[7301]: time="2025-10-25T09:32:53.234565178Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:53 functional-013051 cri-dockerd[8056]: time="2025-10-25T09:32:53Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 25 09:33:12 functional-013051 dockerd[7301]: time="2025-10-25T09:33:12.168655104Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:33:12 functional-013051 dockerd[7301]: time="2025-10-25T09:33:12.265915793Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:33:14 functional-013051 dockerd[7301]: time="2025-10-25T09:33:14.087774312Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:33:14 functional-013051 dockerd[7301]: time="2025-10-25T09:33:14.119728812Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:33:16 functional-013051 dockerd[7301]: time="2025-10-25T09:33:16.085766709Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:33:16 functional-013051 dockerd[7301]: time="2025-10-25T09:33:16.119636153Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:34:18 functional-013051 dockerd[7301]: time="2025-10-25T09:34:18.185127229Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:34:41 functional-013051 dockerd[7301]: time="2025-10-25T09:34:41.089152295Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:34:41 functional-013051 dockerd[7301]: time="2025-10-25T09:34:41.118911082Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:34:41 functional-013051 dockerd[7301]: time="2025-10-25T09:34:41.135360854Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:34:41 functional-013051 dockerd[7301]: time="2025-10-25T09:34:41.163181698Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:34:42 functional-013051 dockerd[7301]: time="2025-10-25T09:34:42.170329744Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:34:44 functional-013051 dockerd[7301]: time="2025-10-25T09:34:44.165122921Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:37:03 functional-013051 dockerd[7301]: time="2025-10-25T09:37:03.245768557Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:37:03 functional-013051 cri-dockerd[8056]: time="2025-10-25T09:37:03Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 25 09:37:23 functional-013051 dockerd[7301]: time="2025-10-25T09:37:23.172007726Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:37:26 functional-013051 dockerd[7301]: time="2025-10-25T09:37:26.174290049Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:37:26 functional-013051 dockerd[7301]: time="2025-10-25T09:37:26.191559564Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:37:26 functional-013051 dockerd[7301]: time="2025-10-25T09:37:26.223367701Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:37:33 functional-013051 dockerd[7301]: time="2025-10-25T09:37:33.089440410Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:37:33 functional-013051 dockerd[7301]: time="2025-10-25T09:37:33.122997735Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
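Note: every image pull in the daemon log above is rejected with toomanyrequests, Docker Hub's anonymous pull rate limit, which is consistent with the nginx:alpine, kubernetesui/dashboard, and kubernetesui/metrics-scraper pods never starting. The remaining anonymous quota can be checked against the registry directly; a sketch using Docker Hub's documented ratelimitpreview/test image (assumes curl and jq are available on the host):

    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -sI -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit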
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	43cb8a90fefa4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   aef06a9da48bd       busybox-mount                               default
	7f275a1a40576       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           6 minutes ago       Running             echo-server               0                   a02a715d473e1       hello-node-connect-7d85dfc575-tpmn9         default
	42166a976ed99       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           6 minutes ago       Running             echo-server               0                   8cdb97bb8c0f0       hello-node-75c85bcc94-rf646                 default
	03b0c7a9b5d6b       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   aea3729f69afc       coredns-66bc5c9577-fjhbs                    kube-system
	ef579063752b0       fc25172553d79                                                                                         6 minutes ago       Running             kube-proxy                2                   6f9767b6042c9       kube-proxy-5krpb                            kube-system
	6e409a0f9fe46       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       4                   1273d9f0bab96       storage-provisioner                         kube-system
	34f580b14e1c2       c3994bc696102                                                                                         6 minutes ago       Running             kube-apiserver            0                   aa8024719e6c1       kube-apiserver-functional-013051            kube-system
	1f79bcf8431dd       7dd6aaa1717ab                                                                                         6 minutes ago       Running             kube-scheduler            3                   ff63caa84d3be       kube-scheduler-functional-013051            kube-system
	fc41b6b617f2b       c80c8dbafe7dd                                                                                         6 minutes ago       Running             kube-controller-manager   3                   34785bb49483f       kube-controller-manager-functional-013051   kube-system
	51d592f1f5f28       5f1f5298c888d                                                                                         6 minutes ago       Running             etcd                      2                   77ada2e139625       etcd-functional-013051                      kube-system
	c2f84e8a1b7dd       c80c8dbafe7dd                                                                                         6 minutes ago       Exited              kube-controller-manager   2                   654f972c5877b       kube-controller-manager-functional-013051   kube-system
	dacc4d625f3e5       7dd6aaa1717ab                                                                                         6 minutes ago       Exited              kube-scheduler            2                   c619257243882       kube-scheduler-functional-013051            kube-system
	4e03e52b9d86e       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       3                   64d440ce2ea7e       storage-provisioner                         kube-system
	b285170b61930       52546a367cc9e                                                                                         7 minutes ago       Exited              coredns                   1                   b7402314fb54a       coredns-66bc5c9577-fjhbs                    kube-system
	9eb254736ef0a       5f1f5298c888d                                                                                         7 minutes ago       Exited              etcd                      1                   7e87f0f902f11       etcd-functional-013051                      kube-system
	af9ce40902859       fc25172553d79                                                                                         7 minutes ago       Exited              kube-proxy                1                   fbff289d0e4e0       kube-proxy-5krpb                            kube-system
	
	
	==> coredns [03b0c7a9b5d6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54283 - 46851 "HINFO IN 2641288560107263801.2003115260268779043. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017540076s
	
	
	==> coredns [b285170b6193] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51027 - 52382 "HINFO IN 774946281068283468.250012538811656753. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.01770873s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-013051
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-013051
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=functional-013051
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_28_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:28:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-013051
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:37:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:37:07 +0000   Sat, 25 Oct 2025 09:28:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:37:07 +0000   Sat, 25 Oct 2025 09:28:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:37:07 +0000   Sat, 25 Oct 2025 09:28:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:37:07 +0000   Sat, 25 Oct 2025 09:28:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-013051
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2b6fb497-e3ec-4e05-b4ac-ce48db73d933
	  Boot ID:                    2fda8ac7-743b-4d90-8011-17dbcec8d3ad
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-rf646                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  default                     hello-node-connect-7d85dfc575-tpmn9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     mysql-5bb876957f-7kwsc                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     5m46s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-fjhbs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m40s
	  kube-system                 etcd-functional-013051                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m45s
	  kube-system                 kube-apiserver-functional-013051              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-controller-manager-functional-013051     200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-proxy-5krpb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 kube-scheduler-functional-013051              100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m45s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-zv5rm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xhprw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m38s                  kube-proxy       
	  Normal   Starting                 6m27s                  kube-proxy       
	  Normal   Starting                 7m27s                  kube-proxy       
	  Normal   Starting                 8m45s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m45s                  kubelet          Node functional-013051 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m45s                  kubelet          Node functional-013051 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m45s                  kubelet          Node functional-013051 status is now: NodeHasSufficientPID
	  Normal   NodeReady                8m42s                  kubelet          Node functional-013051 status is now: NodeReady
	  Normal   RegisteredNode           8m41s                  node-controller  Node functional-013051 event: Registered Node functional-013051 in Controller
	  Warning  ContainerGCFailed        7m45s                  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   RegisteredNode           7m25s                  node-controller  Node functional-013051 event: Registered Node functional-013051 in Controller
	  Normal   Starting                 6m31s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m31s (x8 over 6m31s)  kubelet          Node functional-013051 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m31s (x8 over 6m31s)  kubelet          Node functional-013051 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m31s (x7 over 6m31s)  kubelet          Node functional-013051 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           6m26s                  node-controller  Node functional-013051 event: Registered Node functional-013051 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 35 d3 1d 1a 69 08 06
	[  +0.384426] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 a9 47 60 b0 27 08 06
	[  +0.023599] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 c3 01 22 23 63 08 06
	[ +20.439263] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a 06 8b 23 bb 13 08 06
	[  +7.760078] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a b7 37 cd 89 06 08 06
	[  +0.000495] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 70 1e fe 24 24 08 06
	[Oct25 09:19] IPv4: martian source 10.244.0.1 from 10.244.0.34, on dev eth0
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff ca 69 af 92 55 8d 08 06
	[  +0.000498] IPv4: martian source 10.244.0.34 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 70 1e fe 24 24 08 06
	[  +0.000602] IPv4: martian source 10.244.0.34 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 af 9c e7 7c 87 08 06
	[Oct25 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 cb c2 16 78 86 08 06
	[  +0.000817] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 f6 aa 45 b9 f4 08 06
	[Oct25 09:30] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 a3 a0 a4 aa f3 08 06
	[Oct25 09:31] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 d1 a1 fc a6 37 08 06
	
	
	==> etcd [51d592f1f5f2] <==
	{"level":"warn","ts":"2025-10-25T09:31:08.857828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.864155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.871668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.878442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.889783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.899494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.905666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.912812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.918993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.925374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.932412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.940077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.947481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.954319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.966875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.972981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.979417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.987209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.993570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.999999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:09.006551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:09.020484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:09.037539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:09.044995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:09.105160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41214","server-name":"","error":"EOF"}
	
	
	==> etcd [9eb254736ef0] <==
	{"level":"warn","ts":"2025-10-25T09:30:09.360670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.367952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.382877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.386671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.394573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.402218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.463502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41140","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:30:52.339693Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T09:30:52.339794Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-013051","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-25T09:30:52.339909Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:30:59.341974Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:30:59.342083Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:30:59.342128Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-25T09:30:59.342189Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-25T09:30:59.342156Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-25T09:30:59.342210Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-25T09:30:59.342153Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:30:59.342215Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:30:59.342228Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T09:30:59.342229Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-25T09:30:59.342240Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:30:59.345601Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-25T09:30:59.345685Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:30:59.345721Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-25T09:30:59.345750Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-013051","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:37:38 up  1:20,  0 user,  load average: 0.06, 0.31, 1.15
	Linux functional-013051 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [34f580b14e1c] <==
	I1025 09:31:09.562526       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 09:31:09.565056       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 09:31:09.568342       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:31:09.578085       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:31:09.585324       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:31:09.585359       1 policy_source.go:240] refreshing policies
	I1025 09:31:09.590033       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:31:10.091993       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:31:10.463492       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:31:11.227520       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:31:11.270149       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:31:11.305380       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:31:11.312276       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:31:13.040012       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:31:13.139175       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:31:25.345920       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.200.127"}
	I1025 09:31:29.540464       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:31:29.657011       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.158.117"}
	I1025 09:31:30.766047       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.104.232.211"}
	I1025 09:31:31.701227       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.234.37"}
	I1025 09:31:50.298033       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:31:50.425022       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.97.206"}
	I1025 09:31:50.437956       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.12.84"}
	I1025 09:31:52.622672       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.110.144.143"}
	
	
	==> kube-controller-manager [c2f84e8a1b7d] <==
	
	
	==> kube-controller-manager [fc41b6b617f2] <==
	I1025 09:31:12.882006       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:31:12.886221       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:31:12.886278       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:31:12.886304       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:31:12.886314       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:31:12.886334       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:31:12.886342       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 09:31:12.886364       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:31:12.886380       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:31:12.886443       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:31:12.886456       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:31:12.886383       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:31:12.886926       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:31:12.888246       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:31:12.890349       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:31:12.890371       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:31:12.890413       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:31:12.892783       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:31:12.913176       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 09:31:50.355880       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:31:50.360458       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:31:50.365297       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:31:50.365527       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:31:50.371057       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:31:50.377645       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
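	
	The replica_set sync errors above are a start-up race, not the test failure itself: the dashboard ReplicaSets are reconciled before their ServiceAccount exists, and errors of this kind typically stop once kubernetes-dashboard/kubernetes-dashboard is created. A quick way to confirm the account did appear (illustrative commands, not part of the recorded run):
	
	  kubectl --context functional-013051 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard
	  kubectl --context functional-013051 -n kubernetes-dashboard get replicasets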
	
	
	==> kube-proxy [af9ce4090285] <==
	I1025 09:30:08.507039       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:30:08.577598       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1025 09:30:09.919080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-013051\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1025 09:30:10.978284       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:30:10.978330       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:30:10.978447       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:30:11.003822       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:30:11.003880       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:30:11.010702       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:30:11.011136       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:30:11.011158       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:30:11.012740       1 config.go:200] "Starting service config controller"
	I1025 09:30:11.012774       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:30:11.012778       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:30:11.012793       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:30:11.012974       1 config.go:309] "Starting node config controller"
	I1025 09:30:11.012985       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:30:11.012992       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:30:11.013122       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:30:11.013134       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:30:11.113207       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:30:11.113379       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:30:11.113381       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
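	
	The "Kube-proxy configuration may be incomplete or incorrect" line is advisory: with nodePortAddresses unset, NodePort services accept connections on every local IP. Following the hint in the message, a sketch of the fix under kubeadm defaults (ConfigMap kube-proxy, key config.conf) would be:
	
	  kubectl --context functional-013051 -n kube-system edit configmap kube-proxy
	  # under the KubeProxyConfiguration in config.conf, set:
	  #   nodePortAddresses: ["primary"]
	  kubectl --context functional-013051 -n kube-system rollout restart daemonset kube-proxy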
	
	
	==> kube-proxy [ef579063752b] <==
	I1025 09:31:10.679294       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:31:10.761118       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:31:10.861943       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:31:10.862014       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:31:10.862120       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:31:10.885745       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:31:10.885797       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:31:10.891536       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:31:10.892417       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:31:10.892444       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:31:10.894437       1 config.go:200] "Starting service config controller"
	I1025 09:31:10.894460       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:31:10.894491       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:31:10.894518       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:31:10.894531       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:31:10.894532       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:31:10.894548       1 config.go:309] "Starting node config controller"
	I1025 09:31:10.894559       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:31:10.894566       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:31:10.995487       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:31:10.995616       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:31:10.995644       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1f79bcf8431d] <==
	I1025 09:31:08.125332       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:31:09.491175       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:31:09.491310       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:31:09.491353       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:31:09.491405       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:31:09.504937       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:31:09.504963       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:31:09.506810       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:31:09.506853       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:31:09.507062       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:31:09.507117       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:31:09.607929       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
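	
	The requestheader/authentication warnings during scheduler start-up are transient while the apiserver comes up, and the log prints the rolebinding template to silence them. Since the scheduler authenticates as the user system:kube-scheduler rather than a ServiceAccount, the user-based variant would look like this (the rolebinding name is illustrative):
	
	  kubectl --context functional-013051 -n kube-system create rolebinding scheduler-authn-reader \
	    --role=extension-apiserver-authentication-reader \
	    --user=system:kube-scheduler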
	
	
	==> kube-scheduler [dacc4d625f3e] <==
	I1025 09:31:05.216517       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Oct 25 09:37:08 functional-013051 kubelet[9237]: E1025 09:37:08.070035    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7d430e1f-7d32-4d96-8f2f-002b060f8c85"
	Oct 25 09:37:08 functional-013051 kubelet[9237]: E1025 09:37:08.071805    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zv5rm" podUID="91c2efdf-5f82-45fd-a706-b773dbe83fe5"
	Oct 25 09:37:11 functional-013051 kubelet[9237]: E1025 09:37:11.072004    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7kwsc" podUID="ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d"
	Oct 25 09:37:15 functional-013051 kubelet[9237]: E1025 09:37:15.071419    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhprw" podUID="845035bb-b40a-4128-8f0d-985421b282db"
	Oct 25 09:37:18 functional-013051 kubelet[9237]: E1025 09:37:18.071497    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="27ff61fe-8b0f-4927-be0c-91a13ab78c07"
	Oct 25 09:37:21 functional-013051 kubelet[9237]: E1025 09:37:21.072307    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zv5rm" podUID="91c2efdf-5f82-45fd-a706-b773dbe83fe5"
	Oct 25 09:37:23 functional-013051 kubelet[9237]: E1025 09:37:23.174467    9237 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 25 09:37:23 functional-013051 kubelet[9237]: E1025 09:37:23.174534    9237 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 25 09:37:23 functional-013051 kubelet[9237]: E1025 09:37:23.174706    9237 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(7d430e1f-7d32-4d96-8f2f-002b060f8c85): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 25 09:37:23 functional-013051 kubelet[9237]: E1025 09:37:23.174753    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7d430e1f-7d32-4d96-8f2f-002b060f8c85"
	Oct 25 09:37:26 functional-013051 kubelet[9237]: E1025 09:37:26.176967    9237 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Oct 25 09:37:26 functional-013051 kubelet[9237]: E1025 09:37:26.177027    9237 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Oct 25 09:37:26 functional-013051 kubelet[9237]: E1025 09:37:26.177263    9237 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-7kwsc_default(ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 25 09:37:26 functional-013051 kubelet[9237]: E1025 09:37:26.177322    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7kwsc" podUID="ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d"
	Oct 25 09:37:26 functional-013051 kubelet[9237]: E1025 09:37:26.226083    9237 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:37:26 functional-013051 kubelet[9237]: E1025 09:37:26.226137    9237 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:37:26 functional-013051 kubelet[9237]: E1025 09:37:26.226222    9237 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-xhprw_kubernetes-dashboard(845035bb-b40a-4128-8f0d-985421b282db): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 25 09:37:26 functional-013051 kubelet[9237]: E1025 09:37:26.226259    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhprw" podUID="845035bb-b40a-4128-8f0d-985421b282db"
	Oct 25 09:37:29 functional-013051 kubelet[9237]: E1025 09:37:29.071287    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="27ff61fe-8b0f-4927-be0c-91a13ab78c07"
	Oct 25 09:37:33 functional-013051 kubelet[9237]: E1025 09:37:33.125706    9237 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:37:33 functional-013051 kubelet[9237]: E1025 09:37:33.125769    9237 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:37:33 functional-013051 kubelet[9237]: E1025 09:37:33.125858    9237 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-zv5rm_kubernetes-dashboard(91c2efdf-5f82-45fd-a706-b773dbe83fe5): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 25 09:37:33 functional-013051 kubelet[9237]: E1025 09:37:33.125900    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zv5rm" podUID="91c2efdf-5f82-45fd-a706-b773dbe83fe5"
	Oct 25 09:37:37 functional-013051 kubelet[9237]: E1025 09:37:37.071572    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7kwsc" podUID="ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d"
	Oct 25 09:37:38 functional-013051 kubelet[9237]: E1025 09:37:38.069558    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7d430e1f-7d32-4d96-8f2f-002b060f8c85"
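	
	Every kubelet failure above has the same root cause: Docker Hub's unauthenticated pull rate limit (toomanyrequests) hitting docker.io/nginx, docker.io/mysql:5.7 and the dashboard images. Two ways to confirm or work around it from the host (illustrative; ratelimitpreview/test is Docker's documented probe image):
	
	  TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	  curl -sI -H "Authorization: Bearer $TOKEN" \
	    https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit
	
	  # or pre-seed the image from an authenticated host so the kubelet never pulls:
	  docker pull docker.io/mysql:5.7 && minikube -p functional-013051 image load docker.io/mysql:5.7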
	
	
	==> storage-provisioner [4e03e52b9d86] <==
	I1025 09:30:34.194836       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:30:34.202655       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:30:34.202713       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:30:34.205046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:37.660498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:41.920930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:45.519728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:48.573284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:51.595314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:51.599969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:30:51.600134       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:30:51.600286       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-013051_182f8721-bf79-463f-bebb-81336bb17881!
	I1025 09:30:51.600284       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce306265-d55a-4881-b854-cbfdcc1fb794", APIVersion:"v1", ResourceVersion:"573", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-013051_182f8721-bf79-463f-bebb-81336bb17881 became leader
	W1025 09:30:51.602762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:51.606323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:30:51.700634       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-013051_182f8721-bf79-463f-bebb-81336bb17881!
	
	
	==> storage-provisioner [6e409a0f9fe4] <==
	W1025 09:37:13.263787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:15.267426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:15.273491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:17.277126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:17.280936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:19.284502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:19.290394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:21.293810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:21.297989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:23.301634       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:23.305724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:25.308980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:25.313881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:27.317811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:27.323751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:29.326971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:29.332703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:31.336226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:31.340261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:33.344112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:33.349312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:35.352793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:35.357543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:37.361171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:37:37.365891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
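	
	The storage-provisioner warnings are noise from its leader-election lock, which still uses a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, acquired in the earlier provisioner log) rather than a coordination.k8s.io Lease; nothing in this block is failing. The lock object can be inspected directly (illustrative):
	
	  kubectl --context functional-013051 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml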
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-013051 -n functional-013051
helpers_test.go:269: (dbg) Run:  kubectl --context functional-013051 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-7kwsc nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zv5rm kubernetes-dashboard-855c9754f9-xhprw
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-013051 describe pod busybox-mount mysql-5bb876957f-7kwsc nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zv5rm kubernetes-dashboard-855c9754f9-xhprw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-013051 describe pod busybox-mount mysql-5bb876957f-7kwsc nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zv5rm kubernetes-dashboard-855c9754f9-xhprw: exit status 1 (98.503575ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-013051/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:31:42 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://43cb8a90fefa4ad849aabaf21c6c964ea7968749adeb6d5c8671a98b0779c7e1
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 25 Oct 2025 09:31:44 +0000
	      Finished:     Sat, 25 Oct 2025 09:31:44 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l59mx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-l59mx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m56s  default-scheduler  Successfully assigned default/busybox-mount to functional-013051
	  Normal  Pulling    5m56s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m55s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.438s (1.438s including waiting). Image size: 4403845 bytes.
	  Normal  Created    5m54s  kubelet            Created container: mount-munger
	  Normal  Started    5m54s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-7kwsc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-013051/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:31:52 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.14
	IPs:
	  IP:           10.244.0.14
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wrpmp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wrpmp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m46s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-7kwsc to functional-013051
	  Normal   Pulling    2m54s (x5 over 5m45s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m54s (x5 over 5m45s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m54s (x5 over 5m45s)  kubelet            Error: ErrImagePull
	  Warning  Failed     42s (x20 over 5m45s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    27s (x21 over 5m45s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-013051/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:31:30 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dcmzn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dcmzn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m8s                   default-scheduler  Successfully assigned default/nginx-svc to functional-013051
	  Warning  Failed     4m45s (x2 over 6m7s)   kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m20s (x5 over 6m7s)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m20s (x5 over 6m7s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m20s (x3 over 5m54s)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    61s (x21 over 6m6s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     61s (x21 over 6m6s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-013051/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:31:36 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kbsxr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-kbsxr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m2s                  default-scheduler  Successfully assigned default/sp-pod to functional-013051
	  Normal   Pulling    2m56s (x5 over 6m1s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m56s (x5 over 6m1s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m56s (x5 over 6m1s)  kubelet            Error: ErrImagePull
	  Warning  Failed     56s (x20 over 6m1s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    44s (x21 over 6m1s)   kubelet            Back-off pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-zv5rm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xhprw" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-013051 describe pod busybox-mount mysql-5bb876957f-7kwsc nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zv5rm kubernetes-dashboard-855c9754f9-xhprw: exit status 1
E1025 09:38:07.897181  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (368.81s)

TestFunctional/parallel/MySQL (602.53s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-013051 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-7kwsc" [ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d] Pending
helpers_test.go:352: "mysql-5bb876957f-7kwsc" [ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E1025 09:33:07.897673  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:33:35.610685  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-013051 -n functional-013051
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-10-25 09:41:52.999810912 +0000 UTC m=+1574.559630220
functional_test.go:1804: (dbg) Run:  kubectl --context functional-013051 describe po mysql-5bb876957f-7kwsc -n default
functional_test.go:1804: (dbg) kubectl --context functional-013051 describe po mysql-5bb876957f-7kwsc -n default:
Name:             mysql-5bb876957f-7kwsc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-013051/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:31:52 +0000
Labels:           app=mysql
pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.14
IPs:
IP:           10.244.0.14
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP (mysql)
Host Port:      0/TCP (mysql)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wrpmp (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-wrpmp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-7kwsc to functional-013051
Normal   Pulling    7m9s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     7m9s (x5 over 10m)    kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m9s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
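The events confirm the image pull itself is what fails: five attempts over ten minutes, all rejected with toomanyrequests. The failure is reproducible on the node without the test harness (illustrative):

  minikube -p functional-013051 ssh -- docker pull docker.io/mysql:5.7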
functional_test.go:1804: (dbg) Run:  kubectl --context functional-013051 logs mysql-5bb876957f-7kwsc -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-013051 logs mysql-5bb876957f-7kwsc -n default: exit status 1 (70.721833ms)

** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-7kwsc" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1804: kubectl --context functional-013051 logs mysql-5bb876957f-7kwsc -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-013051
helpers_test.go:243: (dbg) docker inspect functional-013051:

-- stdout --
	[
	    {
	        "Id": "37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef",
	        "Created": "2025-10-25T09:28:36.077114686Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 543329,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:28:36.123634247Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef/hostname",
	        "HostsPath": "/var/lib/docker/containers/37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef/hosts",
	        "LogPath": "/var/lib/docker/containers/37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef/37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef-json.log",
	        "Name": "/functional-013051",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-013051:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-013051",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37cb36089ec0162c59397ccfc2ad40013672e6d088996b9c2dc17afb70414eef",
	                "LowerDir": "/var/lib/docker/overlay2/d04406a4b302bb2917be76804203f6ff3d086ab828ea58b1b3577000ed086d2d-init/diff:/var/lib/docker/overlay2/1190de5deda7780238bce4a73ddfc02156e176e9e10c91e09b0cabf2c2920025/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d04406a4b302bb2917be76804203f6ff3d086ab828ea58b1b3577000ed086d2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d04406a4b302bb2917be76804203f6ff3d086ab828ea58b1b3577000ed086d2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d04406a4b302bb2917be76804203f6ff3d086ab828ea58b1b3577000ed086d2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-013051",
	                "Source": "/var/lib/docker/volumes/functional-013051/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-013051",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-013051",
	                "name.minikube.sigs.k8s.io": "functional-013051",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08aeb02fe70c2344c17476f59fa3a862309d2fe4746baa818542aff3665bee67",
	            "SandboxKey": "/var/run/docker/netns/08aeb02fe70c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33173"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33174"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33177"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33175"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33176"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-013051": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:b0:ad:6a:26:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e394c09242ef04c4ed8cb204e73fc44353107b0c1768c85597d6cb7faf470fe3",
	                    "EndpointID": "8bc54d57dd0e63f3306d71ad45d6390e39ffad45fca21cb3f248295817782990",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-013051",
	                        "37cb36089ec0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-013051 -n functional-013051
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-013051 logs -n 25: (1.038346136s)
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-013051 ssh sudo umount -f /mount-9p                                                                     │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ ssh            │ functional-013051 ssh sudo cat /usr/share/ca-certificates/5033462.pem                                              │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh            │ functional-013051 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                           │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh            │ functional-013051 ssh findmnt -T /mount1                                                                           │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ mount          │ -p functional-013051 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1428527505/001:/mount1 --alsologtostderr -v=1 │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ mount          │ -p functional-013051 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1428527505/001:/mount2 --alsologtostderr -v=1 │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ mount          │ -p functional-013051 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1428527505/001:/mount3 --alsologtostderr -v=1 │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-013051 --alsologtostderr -v=1                                                     │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ ssh            │ functional-013051 ssh findmnt -T /mount1                                                                           │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh            │ functional-013051 ssh findmnt -T /mount2                                                                           │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh            │ functional-013051 ssh findmnt -T /mount3                                                                           │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ mount          │ -p functional-013051 --kill=true                                                                                   │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │                     │
	│ docker-env     │ functional-013051 docker-env                                                                                       │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ docker-env     │ functional-013051 docker-env                                                                                       │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ ssh            │ functional-013051 ssh sudo cat /etc/test/nested/copy/503346/hosts                                                  │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:31 UTC │ 25 Oct 25 09:31 UTC │
	│ image          │ functional-013051 image ls --format short --alsologtostderr                                                        │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ image          │ functional-013051 image ls --format yaml --alsologtostderr                                                         │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ ssh            │ functional-013051 ssh pgrep buildkitd                                                                              │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ image          │ functional-013051 image build -t localhost/my-image:functional-013051 testdata/build --alsologtostderr             │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ image          │ functional-013051 image ls                                                                                         │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ image          │ functional-013051 image ls --format json --alsologtostderr                                                         │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ image          │ functional-013051 image ls --format table --alsologtostderr                                                        │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ update-context │ functional-013051 update-context --alsologtostderr -v=2                                                            │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ update-context │ functional-013051 update-context --alsologtostderr -v=2                                                            │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ update-context │ functional-013051 update-context --alsologtostderr -v=2                                                            │ functional-013051 │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:31:47
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:31:47.446424  562892 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:31:47.446775  562892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:31:47.446788  562892 out.go:374] Setting ErrFile to fd 2...
	I1025 09:31:47.446793  562892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:31:47.447140  562892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
	I1025 09:31:47.447920  562892 out.go:368] Setting JSON to false
	I1025 09:31:47.449085  562892 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4455,"bootTime":1761380252,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:31:47.449161  562892 start.go:141] virtualization: kvm guest
	I1025 09:31:47.450767  562892 out.go:179] * [functional-013051] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:31:47.452422  562892 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:31:47.452455  562892 notify.go:220] Checking for updates...
	I1025 09:31:47.455160  562892 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:31:47.456673  562892 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig
	I1025 09:31:47.458138  562892 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube
	I1025 09:31:47.459719  562892 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:31:47.461132  562892 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:31:47.463069  562892 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:31:47.463887  562892 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:31:47.489930  562892 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:31:47.490043  562892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:31:47.549908  562892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 09:31:47.539959558 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:31:47.550038  562892 docker.go:318] overlay module found
	I1025 09:31:47.551897  562892 out.go:179] * Using the docker driver based on the existing profile
	I1025 09:31:47.553183  562892 start.go:305] selected driver: docker
	I1025 09:31:47.553200  562892 start.go:925] validating driver "docker" against &{Name:functional-013051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-013051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:31:47.553297  562892 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:31:47.555201  562892 out.go:203] 
	W1025 09:31:47.556645  562892 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 09:31:47.558026  562892 out.go:203] 
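This start attempt is rejected at validation: the requested 250 MiB is below minikube's usable minimum of 1800 MB, so the run exits before touching the cluster. For reference, a request that clears the check would look like the following (the memory value is illustrative, not taken from this run):

	# Assumed sketch: request at least minikube's 1800 MB minimum for the profile
	out/minikube-linux-amd64 start -p functional-013051 --memory=2048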
	
	
	==> Docker <==
	Oct 25 09:32:26 functional-013051 dockerd[7301]: time="2025-10-25T09:32:26.170934974Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:31 functional-013051 dockerd[7301]: time="2025-10-25T09:32:31.177263106Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:53 functional-013051 dockerd[7301]: time="2025-10-25T09:32:53.234565178Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:32:53 functional-013051 cri-dockerd[8056]: time="2025-10-25T09:32:53Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 25 09:33:12 functional-013051 dockerd[7301]: time="2025-10-25T09:33:12.168655104Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:33:12 functional-013051 dockerd[7301]: time="2025-10-25T09:33:12.265915793Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:33:14 functional-013051 dockerd[7301]: time="2025-10-25T09:33:14.087774312Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:33:14 functional-013051 dockerd[7301]: time="2025-10-25T09:33:14.119728812Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:33:16 functional-013051 dockerd[7301]: time="2025-10-25T09:33:16.085766709Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:33:16 functional-013051 dockerd[7301]: time="2025-10-25T09:33:16.119636153Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:34:18 functional-013051 dockerd[7301]: time="2025-10-25T09:34:18.185127229Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:34:41 functional-013051 dockerd[7301]: time="2025-10-25T09:34:41.089152295Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:34:41 functional-013051 dockerd[7301]: time="2025-10-25T09:34:41.118911082Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:34:41 functional-013051 dockerd[7301]: time="2025-10-25T09:34:41.135360854Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:34:41 functional-013051 dockerd[7301]: time="2025-10-25T09:34:41.163181698Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:34:42 functional-013051 dockerd[7301]: time="2025-10-25T09:34:42.170329744Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:34:44 functional-013051 dockerd[7301]: time="2025-10-25T09:34:44.165122921Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:37:03 functional-013051 dockerd[7301]: time="2025-10-25T09:37:03.245768557Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:37:03 functional-013051 cri-dockerd[8056]: time="2025-10-25T09:37:03Z" level=info msg="Stop pulling image docker.io/nginx:alpine: alpine: Pulling from library/nginx"
	Oct 25 09:37:23 functional-013051 dockerd[7301]: time="2025-10-25T09:37:23.172007726Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:37:26 functional-013051 dockerd[7301]: time="2025-10-25T09:37:26.174290049Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:37:26 functional-013051 dockerd[7301]: time="2025-10-25T09:37:26.191559564Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Oct 25 09:37:26 functional-013051 dockerd[7301]: time="2025-10-25T09:37:26.223367701Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 25 09:37:33 functional-013051 dockerd[7301]: time="2025-10-25T09:37:33.089440410Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 25 09:37:33 functional-013051 dockerd[7301]: time="2025-10-25T09:37:33.122997735Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
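Every pull in this window is refused with toomanyrequests, Docker Hub's unauthenticated pull rate limit, which is the proximate cause of the nginx:alpine and dashboard ImagePullBackOff states seen earlier. A minimal sketch for checking the remaining anonymous quota from the affected host, using Docker's documented token flow (jq assumed available):

	# Fetch an anonymous token, then read the ratelimit-* headers from a HEAD request
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

One possible mitigation, assuming a pull-through cache is reachable from CI, is to create the profile with minikube start --registry-mirror=https://<mirror-host> so node pulls bypass the Hub's anonymous limit; the mirror URL here is a placeholder.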
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	43cb8a90fefa4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              mount-munger              0                   aef06a9da48bd       busybox-mount                               default
	7f275a1a40576       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           10 minutes ago      Running             echo-server               0                   a02a715d473e1       hello-node-connect-7d85dfc575-tpmn9         default
	42166a976ed99       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           10 minutes ago      Running             echo-server               0                   8cdb97bb8c0f0       hello-node-75c85bcc94-rf646                 default
	03b0c7a9b5d6b       52546a367cc9e                                                                                         10 minutes ago      Running             coredns                   2                   aea3729f69afc       coredns-66bc5c9577-fjhbs                    kube-system
	ef579063752b0       fc25172553d79                                                                                         10 minutes ago      Running             kube-proxy                2                   6f9767b6042c9       kube-proxy-5krpb                            kube-system
	6e409a0f9fe46       6e38f40d628db                                                                                         10 minutes ago      Running             storage-provisioner       4                   1273d9f0bab96       storage-provisioner                         kube-system
	34f580b14e1c2       c3994bc696102                                                                                         10 minutes ago      Running             kube-apiserver            0                   aa8024719e6c1       kube-apiserver-functional-013051            kube-system
	1f79bcf8431dd       7dd6aaa1717ab                                                                                         10 minutes ago      Running             kube-scheduler            3                   ff63caa84d3be       kube-scheduler-functional-013051            kube-system
	fc41b6b617f2b       c80c8dbafe7dd                                                                                         10 minutes ago      Running             kube-controller-manager   3                   34785bb49483f       kube-controller-manager-functional-013051   kube-system
	51d592f1f5f28       5f1f5298c888d                                                                                         10 minutes ago      Running             etcd                      2                   77ada2e139625       etcd-functional-013051                      kube-system
	c2f84e8a1b7dd       c80c8dbafe7dd                                                                                         10 minutes ago      Exited              kube-controller-manager   2                   654f972c5877b       kube-controller-manager-functional-013051   kube-system
	dacc4d625f3e5       7dd6aaa1717ab                                                                                         10 minutes ago      Exited              kube-scheduler            2                   c619257243882       kube-scheduler-functional-013051            kube-system
	4e03e52b9d86e       6e38f40d628db                                                                                         11 minutes ago      Exited              storage-provisioner       3                   64d440ce2ea7e       storage-provisioner                         kube-system
	b285170b61930       52546a367cc9e                                                                                         11 minutes ago      Exited              coredns                   1                   b7402314fb54a       coredns-66bc5c9577-fjhbs                    kube-system
	9eb254736ef0a       5f1f5298c888d                                                                                         11 minutes ago      Exited              etcd                      1                   7e87f0f902f11       etcd-functional-013051                      kube-system
	af9ce40902859       fc25172553d79                                                                                         11 minutes ago      Exited              kube-proxy                1                   fbff289d0e4e0       kube-proxy-5krpb                            kube-system
	
	
	==> coredns [03b0c7a9b5d6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54283 - 46851 "HINFO IN 2641288560107263801.2003115260268779043. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017540076s
	
	
	==> coredns [b285170b6193] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51027 - 52382 "HINFO IN 774946281068283468.250012538811656753. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.01770873s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-013051
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-013051
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=functional-013051
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_28_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:28:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-013051
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:41:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:37:07 +0000   Sat, 25 Oct 2025 09:28:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:37:07 +0000   Sat, 25 Oct 2025 09:28:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:37:07 +0000   Sat, 25 Oct 2025 09:28:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:37:07 +0000   Sat, 25 Oct 2025 09:28:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-013051
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2b6fb497-e3ec-4e05-b4ac-ce48db73d933
	  Boot ID:                    2fda8ac7-743b-4d90-8011-17dbcec8d3ad
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-rf646                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-tpmn9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-7kwsc                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-fjhbs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-013051                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kube-apiserver-functional-013051              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-013051     200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-5krpb                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-013051              100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-zv5rm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xhprw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                kubelet          Node functional-013051 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                kubelet          Node functional-013051 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                kubelet          Node functional-013051 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                kubelet          Node functional-013051 status is now: NodeReady
	  Normal   RegisteredNode           12m                node-controller  Node functional-013051 event: Registered Node functional-013051 in Controller
	  Warning  ContainerGCFailed        12m                kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   RegisteredNode           11m                node-controller  Node functional-013051 event: Registered Node functional-013051 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-013051 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-013051 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-013051 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node functional-013051 event: Registered Node functional-013051 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 35 d3 1d 1a 69 08 06
	[  +0.384426] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 a9 47 60 b0 27 08 06
	[  +0.023599] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 c3 01 22 23 63 08 06
	[ +20.439263] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a 06 8b 23 bb 13 08 06
	[  +7.760078] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a b7 37 cd 89 06 08 06
	[  +0.000495] IPv4: martian source 10.244.0.27 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 70 1e fe 24 24 08 06
	[Oct25 09:19] IPv4: martian source 10.244.0.1 from 10.244.0.34, on dev eth0
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff ca 69 af 92 55 8d 08 06
	[  +0.000498] IPv4: martian source 10.244.0.34 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 70 1e fe 24 24 08 06
	[  +0.000602] IPv4: martian source 10.244.0.34 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 af 9c e7 7c 87 08 06
	[Oct25 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 cb c2 16 78 86 08 06
	[  +0.000817] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 f6 aa 45 b9 f4 08 06
	[Oct25 09:30] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 a3 a0 a4 aa f3 08 06
	[Oct25 09:31] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 d1 a1 fc a6 37 08 06
	
	
	==> etcd [51d592f1f5f2] <==
	{"level":"warn","ts":"2025-10-25T09:31:08.878442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.889783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.899494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.905666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.912812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.918993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.925374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.932412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.940077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.947481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.954319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.966875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.972981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.979417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.987209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.993570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:08.999999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:09.006551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:09.020484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:09.037539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:09.044995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:31:09.105160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41214","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:41:08.606978Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1375}
	{"level":"info","ts":"2025-10-25T09:41:08.627155Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1375,"took":"19.798597ms","hash":2166164199,"current-db-size-bytes":4030464,"current-db-size":"4.0 MB","current-db-size-in-use-bytes":2101248,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-10-25T09:41:08.627220Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2166164199,"revision":1375,"compact-revision":-1}
	
	
	==> etcd [9eb254736ef0] <==
	{"level":"warn","ts":"2025-10-25T09:30:09.360670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.367952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.382877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.386671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.394573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.402218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:30:09.463502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41140","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:30:52.339693Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T09:30:52.339794Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-013051","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-25T09:30:52.339909Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:30:59.341974Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:30:59.342083Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:30:59.342128Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-25T09:30:59.342189Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-25T09:30:59.342156Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-25T09:30:59.342210Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-25T09:30:59.342153Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:30:59.342215Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:30:59.342228Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T09:30:59.342229Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-10-25T09:30:59.342240Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:30:59.345601Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-25T09:30:59.345685Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:30:59.345721Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-25T09:30:59.345750Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-013051","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:41:54 up  1:24,  0 user,  load average: 0.11, 0.21, 0.90
	Linux functional-013051 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [34f580b14e1c] <==
	I1025 09:31:09.565056       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 09:31:09.568342       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 09:31:09.578085       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:31:09.585324       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:31:09.585359       1 policy_source.go:240] refreshing policies
	I1025 09:31:09.590033       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:31:10.091993       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:31:10.091993       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:31:10.463492       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:31:11.227520       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:31:11.270149       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:31:11.305380       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:31:11.312276       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:31:13.040012       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 09:31:13.139175       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:31:25.345920       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.200.127"}
	I1025 09:31:29.540464       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:31:29.657011       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.158.117"}
	I1025 09:31:30.766047       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.104.232.211"}
	I1025 09:31:31.701227       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.234.37"}
	I1025 09:31:50.298033       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 09:31:50.425022       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.97.206"}
	I1025 09:31:50.437956       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.12.84"}
	I1025 09:31:52.622672       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.110.144.143"}
	I1025 09:41:09.492329       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [c2f84e8a1b7d] <==
	
	
	==> kube-controller-manager [fc41b6b617f2] <==
	I1025 09:31:12.882006       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 09:31:12.886221       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:31:12.886278       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:31:12.886304       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:31:12.886314       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:31:12.886334       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:31:12.886342       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 09:31:12.886364       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:31:12.886380       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:31:12.886443       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:31:12.886456       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:31:12.886383       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:31:12.886926       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:31:12.888246       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:31:12.890349       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:31:12.890371       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 09:31:12.890413       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:31:12.892783       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:31:12.913176       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 09:31:50.355880       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:31:50.360458       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:31:50.365297       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:31:50.365527       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:31:50.371057       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1025 09:31:50.377645       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [af9ce4090285] <==
	I1025 09:30:08.507039       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:30:08.577598       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1025 09:30:09.919080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"functional-013051\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1025 09:30:10.978284       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:30:10.978330       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:30:10.978447       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:30:11.003822       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:30:11.003880       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:30:11.010702       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:30:11.011136       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:30:11.011158       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:30:11.012740       1 config.go:200] "Starting service config controller"
	I1025 09:30:11.012774       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:30:11.012778       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:30:11.012793       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:30:11.012974       1 config.go:309] "Starting node config controller"
	I1025 09:30:11.012985       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:30:11.012992       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:30:11.013122       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:30:11.013134       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:30:11.113207       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:30:11.113379       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:30:11.113381       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [ef579063752b] <==
	I1025 09:31:10.679294       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:31:10.761118       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:31:10.861943       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:31:10.862014       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:31:10.862120       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:31:10.885745       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:31:10.885797       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:31:10.891536       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:31:10.892417       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:31:10.892444       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:31:10.894437       1 config.go:200] "Starting service config controller"
	I1025 09:31:10.894460       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:31:10.894491       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:31:10.894518       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:31:10.894531       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:31:10.894532       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:31:10.894548       1 config.go:309] "Starting node config controller"
	I1025 09:31:10.894559       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:31:10.894566       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:31:10.995487       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:31:10.995616       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:31:10.995644       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1f79bcf8431d] <==
	I1025 09:31:08.125332       1 serving.go:386] Generated self-signed cert in-memory
	W1025 09:31:09.491175       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 09:31:09.491310       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 09:31:09.491353       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 09:31:09.491405       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 09:31:09.504937       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:31:09.504963       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:31:09.506810       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:31:09.506853       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:31:09.507062       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:31:09.507117       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:31:09.607929       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [dacc4d625f3e] <==
	I1025 09:31:05.216517       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Oct 25 09:40:48 functional-013051 kubelet[9237]: E1025 09:40:48.071923    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="27ff61fe-8b0f-4927-be0c-91a13ab78c07"
	Oct 25 09:40:58 functional-013051 kubelet[9237]: E1025 09:40:58.071090    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7kwsc" podUID="ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d"
	Oct 25 09:40:58 functional-013051 kubelet[9237]: E1025 09:40:58.071169    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhprw" podUID="845035bb-b40a-4128-8f0d-985421b282db"
	Oct 25 09:41:00 functional-013051 kubelet[9237]: E1025 09:41:00.069290    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7d430e1f-7d32-4d96-8f2f-002b060f8c85"
	Oct 25 09:41:00 functional-013051 kubelet[9237]: E1025 09:41:00.071516    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zv5rm" podUID="91c2efdf-5f82-45fd-a706-b773dbe83fe5"
	Oct 25 09:41:03 functional-013051 kubelet[9237]: E1025 09:41:03.071895    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="27ff61fe-8b0f-4927-be0c-91a13ab78c07"
	Oct 25 09:41:09 functional-013051 kubelet[9237]: E1025 09:41:09.072118    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhprw" podUID="845035bb-b40a-4128-8f0d-985421b282db"
	Oct 25 09:41:12 functional-013051 kubelet[9237]: E1025 09:41:12.069474    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7d430e1f-7d32-4d96-8f2f-002b060f8c85"
	Oct 25 09:41:13 functional-013051 kubelet[9237]: E1025 09:41:13.071136    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zv5rm" podUID="91c2efdf-5f82-45fd-a706-b773dbe83fe5"
	Oct 25 09:41:13 functional-013051 kubelet[9237]: E1025 09:41:13.071151    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7kwsc" podUID="ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d"
	Oct 25 09:41:18 functional-013051 kubelet[9237]: E1025 09:41:18.071989    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="27ff61fe-8b0f-4927-be0c-91a13ab78c07"
	Oct 25 09:41:21 functional-013051 kubelet[9237]: E1025 09:41:21.071100    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhprw" podUID="845035bb-b40a-4128-8f0d-985421b282db"
	Oct 25 09:41:26 functional-013051 kubelet[9237]: E1025 09:41:26.070952    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7kwsc" podUID="ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d"
	Oct 25 09:41:27 functional-013051 kubelet[9237]: E1025 09:41:27.077324    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7d430e1f-7d32-4d96-8f2f-002b060f8c85"
	Oct 25 09:41:28 functional-013051 kubelet[9237]: E1025 09:41:28.071522    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zv5rm" podUID="91c2efdf-5f82-45fd-a706-b773dbe83fe5"
	Oct 25 09:41:33 functional-013051 kubelet[9237]: E1025 09:41:33.071843    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="27ff61fe-8b0f-4927-be0c-91a13ab78c07"
	Oct 25 09:41:35 functional-013051 kubelet[9237]: E1025 09:41:35.078046    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhprw" podUID="845035bb-b40a-4128-8f0d-985421b282db"
	Oct 25 09:41:39 functional-013051 kubelet[9237]: E1025 09:41:39.077414    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7d430e1f-7d32-4d96-8f2f-002b060f8c85"
	Oct 25 09:41:40 functional-013051 kubelet[9237]: E1025 09:41:40.071742    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7kwsc" podUID="ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d"
	Oct 25 09:41:40 functional-013051 kubelet[9237]: E1025 09:41:40.071753    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zv5rm" podUID="91c2efdf-5f82-45fd-a706-b773dbe83fe5"
	Oct 25 09:41:45 functional-013051 kubelet[9237]: E1025 09:41:45.079543    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="27ff61fe-8b0f-4927-be0c-91a13ab78c07"
	Oct 25 09:41:46 functional-013051 kubelet[9237]: E1025 09:41:46.071717    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xhprw" podUID="845035bb-b40a-4128-8f0d-985421b282db"
	Oct 25 09:41:51 functional-013051 kubelet[9237]: E1025 09:41:51.069416    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7d430e1f-7d32-4d96-8f2f-002b060f8c85"
	Oct 25 09:41:51 functional-013051 kubelet[9237]: E1025 09:41:51.071237    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7kwsc" podUID="ca0612f8-fbf9-4d36-93f1-8eb650dbbf3d"
	Oct 25 09:41:54 functional-013051 kubelet[9237]: E1025 09:41:54.071435    9237 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zv5rm" podUID="91c2efdf-5f82-45fd-a706-b773dbe83fe5"
	
	
	==> storage-provisioner [4e03e52b9d86] <==
	I1025 09:30:34.194836       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:30:34.202655       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:30:34.202713       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:30:34.205046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:37.660498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:41.920930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:45.519728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:48.573284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:51.595314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:51.599969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:30:51.600134       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:30:51.600286       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-013051_182f8721-bf79-463f-bebb-81336bb17881!
	I1025 09:30:51.600284       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce306265-d55a-4881-b854-cbfdcc1fb794", APIVersion:"v1", ResourceVersion:"573", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-013051_182f8721-bf79-463f-bebb-81336bb17881 became leader
	W1025 09:30:51.602762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:30:51.606323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:30:51.700634       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-013051_182f8721-bf79-463f-bebb-81336bb17881!
	
	
	==> storage-provisioner [6e409a0f9fe4] <==
	W1025 09:41:30.301145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:32.304680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:32.308780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:34.312486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:34.316753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:36.320048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:36.325093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:38.328812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:38.332705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:40.335964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:40.341071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:42.344788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:42.348473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:44.352964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:44.357788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:46.361343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:46.365873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:48.368486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:48.373640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:50.376964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:50.381193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:52.384825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:52.388751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:54.392732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:54.397001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
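
Note: every Pending pod in the dump above is blocked on the same root cause, Docker Hub's unauthenticated pull rate limit (toomanyrequests). The dashboard ServiceAccount-not-found errors from the ReplicaSet controller are transient and clear once the ServiceAccount exists. The stuck workloads can be listed directly with the same field selector the harness uses below:

  kubectl --context functional-013051 get po -A --field-selector=status.phase!=Running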
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-013051 -n functional-013051
helpers_test.go:269: (dbg) Run:  kubectl --context functional-013051 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-7kwsc nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zv5rm kubernetes-dashboard-855c9754f9-xhprw
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-013051 describe pod busybox-mount mysql-5bb876957f-7kwsc nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zv5rm kubernetes-dashboard-855c9754f9-xhprw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-013051 describe pod busybox-mount mysql-5bb876957f-7kwsc nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zv5rm kubernetes-dashboard-855c9754f9-xhprw: exit status 1 (91.811134ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-013051/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:31:42 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://43cb8a90fefa4ad849aabaf21c6c964ea7968749adeb6d5c8671a98b0779c7e1
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 25 Oct 2025 09:31:44 +0000
	      Finished:     Sat, 25 Oct 2025 09:31:44 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l59mx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-l59mx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-013051
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.438s (1.438s including waiting). Image size: 4403845 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-7kwsc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-013051/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:31:52 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.14
	IPs:
	  IP:           10.244.0.14
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wrpmp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wrpmp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-7kwsc to functional-013051
	  Normal   Pulling    7m11s (x5 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-013051/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:31:30 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dcmzn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-dcmzn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/nginx-svc to functional-013051
	  Warning  Failed     9m2s (x2 over 10m)   kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m37s (x5 over 10m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     7m37s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m37s (x3 over 10m)  kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    22s (x42 over 10m)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     22s (x42 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-013051/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:31:36 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kbsxr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-kbsxr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/sp-pod to functional-013051
	  Normal   Pulling    7m13s (x5 over 10m)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m13s (x5 over 10m)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m13s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    16s (x42 over 10m)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     16s (x42 over 10m)   kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-zv5rm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xhprw" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-013051 describe pod busybox-mount mysql-5bb876957f-7kwsc nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zv5rm kubernetes-dashboard-855c9754f9-xhprw: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.53s)
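
Note: the mysql pod never started because kubelet could not pull docker.io/mysql:5.7 past Docker Hub's unauthenticated pull rate limit, so this is an environmental failure rather than a MySQL-specific regression. A minimal mitigation sketch, assuming the CI host has authenticated Docker Hub credentials (docker login already done):

  docker pull docker.io/mysql:5.7                                                # authenticated pull on the host daemon
  out/minikube-linux-amd64 -p functional-013051 image load docker.io/mysql:5.7   # side-load the image into the node, skipping in-cluster pulls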

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.7s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-013051 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [27ff61fe-8b0f-4927-be0c-91a13ab78c07] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-013051 -n functional-013051
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-10-25 09:35:31.117146312 +0000 UTC m=+1192.676965611
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-013051 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-013051 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-013051/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:31:30 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:  10.244.0.8
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dcmzn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-dcmzn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  4m                    default-scheduler  Successfully assigned default/nginx-svc to functional-013051
Warning  Failed     2m38s (x2 over 4m)    kubelet            Failed to pull image "docker.io/nginx:alpine": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    73s (x5 over 4m)      kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     73s (x5 over 4m)      kubelet            Error: ErrImagePull
Warning  Failed     73s (x3 over 3m47s)   kubelet            Failed to pull image "docker.io/nginx:alpine": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    11s (x15 over 3m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     11s (x15 over 3m59s)  kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-013051 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-013051 logs nginx-svc -n default: exit status 1 (72.096298ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-013051 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.70s)
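
Note: same root cause as the MySQL failure; nginx-svc sits in ImagePullBackOff on docker.io/nginx:alpine, so the 4m0s readiness wait can never succeed. An illustrative check that the pod is blocked on the image pull rather than on scheduling:

  kubectl --context functional-013051 get pod nginx-svc -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'   # prints ImagePullBackOff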

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (106.87s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1025 09:35:31.256929  503346 retry.go:31] will retry after 1.875730601s: Temporary Error: Get "http:": http: no Host in request URL
I1025 09:35:33.133760  503346 retry.go:31] will retry after 3.095811687s: Temporary Error: Get "http:": http: no Host in request URL
I1025 09:35:36.229869  503346 retry.go:31] will retry after 3.543575194s: Temporary Error: Get "http:": http: no Host in request URL
I1025 09:35:39.774425  503346 retry.go:31] will retry after 10.168935215s: Temporary Error: Get "http:": http: no Host in request URL
I1025 09:35:49.944282  503346 retry.go:31] will retry after 13.944040862s: Temporary Error: Get "http:": http: no Host in request URL
I1025 09:36:03.888691  503346 retry.go:31] will retry after 33.461733767s: Temporary Error: Get "http:": http: no Host in request URL
I1025 09:36:37.351135  503346 retry.go:31] will retry after 40.713240794s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-013051 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
nginx-svc   LoadBalancer   10.104.232.211   10.104.232.211   80:30521/TCP   5m48s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (106.87s)
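
Note: the retry loop was issuing Get "http:" because the URL under test had an empty hostname. The tunnel did assign an external IP (10.104.232.211), but with the backing pod stuck in ImagePullBackOff there was never a Ready endpoint to serve the page. An illustrative manual check, assuming the tunnel is still running:

  IP=$(kubectl --context functional-013051 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl -sS "http://${IP}/"   # should return the "Welcome to nginx!" page once the pod is Ready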

TestScheduledStopUnix (27.47s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-906952 --memory=3072 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-906952 --memory=3072 --driver=docker  --container-runtime=docker: (23.158603239s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-906952 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-906952 -n scheduled-stop-906952
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-906952 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 753463 running but should have been killed on reschedule of stop
panic.go:636: *** TestScheduledStopUnix FAILED at 2025-10-25 10:05:22.246226443 +0000 UTC m=+2983.806045759
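
Note: a second --schedule invocation is expected to kill the daemon spawned for the first one before arming the new timer; the assertion above found the original scheduled-stop process (753463) still alive after the reschedule. For reference, the command sequence under test, plus the flag that clears a pending stop:

  out/minikube-linux-amd64 stop -p scheduled-stop-906952 --schedule 5m
  out/minikube-linux-amd64 stop -p scheduled-stop-906952 --schedule 15s       # should replace the 5m schedule
  out/minikube-linux-amd64 stop -p scheduled-stop-906952 --cancel-scheduled   # cancels any pending scheduled stop
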
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestScheduledStopUnix]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect scheduled-stop-906952
helpers_test.go:243: (dbg) docker inspect scheduled-stop-906952:

-- stdout --
	[
	    {
	        "Id": "0971553e621af2732971a6f07924c346cc0c9883d39431d1fcfb7e281bb60b92",
	        "Created": "2025-10-25T10:05:03.300019841Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 750467,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:05:03.334569117Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/0971553e621af2732971a6f07924c346cc0c9883d39431d1fcfb7e281bb60b92/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0971553e621af2732971a6f07924c346cc0c9883d39431d1fcfb7e281bb60b92/hostname",
	        "HostsPath": "/var/lib/docker/containers/0971553e621af2732971a6f07924c346cc0c9883d39431d1fcfb7e281bb60b92/hosts",
	        "LogPath": "/var/lib/docker/containers/0971553e621af2732971a6f07924c346cc0c9883d39431d1fcfb7e281bb60b92/0971553e621af2732971a6f07924c346cc0c9883d39431d1fcfb7e281bb60b92-json.log",
	        "Name": "/scheduled-stop-906952",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-906952:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "scheduled-stop-906952",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0971553e621af2732971a6f07924c346cc0c9883d39431d1fcfb7e281bb60b92",
	                "LowerDir": "/var/lib/docker/overlay2/5ab087fef1a17c4b0b6ea48b43d6cc432b667bfff4f70733718f6209689a7189-init/diff:/var/lib/docker/overlay2/1190de5deda7780238bce4a73ddfc02156e176e9e10c91e09b0cabf2c2920025/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ab087fef1a17c4b0b6ea48b43d6cc432b667bfff4f70733718f6209689a7189/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ab087fef1a17c4b0b6ea48b43d6cc432b667bfff4f70733718f6209689a7189/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ab087fef1a17c4b0b6ea48b43d6cc432b667bfff4f70733718f6209689a7189/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-906952",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-906952/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-906952",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-906952",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-906952",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a0de83825478453098acabc9592759b261d8a7573d379629dd5659c1767fc647",
	            "SandboxKey": "/var/run/docker/netns/a0de83825478",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33363"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33364"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33367"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33365"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33366"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-906952": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:b9:62:fb:18:9c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "904252805c60ab1a2fda46ba73aecd8431c2f6d0ae6968750f998f64a7769af8",
	                    "EndpointID": "c2cb6480b5fcaf8848c6a83a22189ed3eb4b79461e658055cec837771d121cb0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-906952",
	                        "0971553e621a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
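The inspect dump above records each published guest port (22, 2376, 5000, 8443, 32443) bound to an ephemeral port on 127.0.0.1 (33363-33367). A minimal Go sketch, outside the test harness and with illustrative struct names, that decodes just that port map from docker container inspect:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// portBinding and inspectEntry are illustrative names; only the fields
	// needed for NetworkSettings.Ports are modeled here.
	type portBinding struct {
		HostIp   string
		HostPort string
	}

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]portBinding
		}
	}

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "scheduled-stop-906952").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry // docker inspect always emits a JSON array
		if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
			panic("unexpected inspect output")
		}
		for port, bindings := range entries[0].NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort) // e.g. 22/tcp -> 127.0.0.1:33363
			}
		}
	}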
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-906952 -n scheduled-stop-906952
helpers_test.go:252: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p scheduled-stop-906952 logs -n 25
helpers_test.go:260: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                            ARGS                                                                             │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p multinode-315751                                                                                                                                         │ multinode-315751      │ jenkins │ v1.37.0 │ 25 Oct 25 10:00 UTC │ 25 Oct 25 10:00 UTC │
	│ start   │ -p multinode-315751 --wait=true -v=5 --alsologtostderr                                                                                                      │ multinode-315751      │ jenkins │ v1.37.0 │ 25 Oct 25 10:00 UTC │ 25 Oct 25 10:01 UTC │
	│ node    │ list -p multinode-315751                                                                                                                                    │ multinode-315751      │ jenkins │ v1.37.0 │ 25 Oct 25 10:01 UTC │                     │
	│ node    │ multinode-315751 node delete m03                                                                                                                            │ multinode-315751      │ jenkins │ v1.37.0 │ 25 Oct 25 10:01 UTC │ 25 Oct 25 10:01 UTC │
	│ stop    │ multinode-315751 stop                                                                                                                                       │ multinode-315751      │ jenkins │ v1.37.0 │ 25 Oct 25 10:01 UTC │ 25 Oct 25 10:01 UTC │
	│ start   │ -p multinode-315751 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker                                                          │ multinode-315751      │ jenkins │ v1.37.0 │ 25 Oct 25 10:01 UTC │ 25 Oct 25 10:02 UTC │
	│ node    │ list -p multinode-315751                                                                                                                                    │ multinode-315751      │ jenkins │ v1.37.0 │ 25 Oct 25 10:02 UTC │                     │
	│ start   │ -p multinode-315751-m02 --driver=docker  --container-runtime=docker                                                                                         │ multinode-315751-m02  │ jenkins │ v1.37.0 │ 25 Oct 25 10:02 UTC │                     │
	│ start   │ -p multinode-315751-m03 --driver=docker  --container-runtime=docker                                                                                         │ multinode-315751-m03  │ jenkins │ v1.37.0 │ 25 Oct 25 10:02 UTC │ 25 Oct 25 10:03 UTC │
	│ node    │ add -p multinode-315751                                                                                                                                     │ multinode-315751      │ jenkins │ v1.37.0 │ 25 Oct 25 10:03 UTC │                     │
	│ delete  │ -p multinode-315751-m03                                                                                                                                     │ multinode-315751-m03  │ jenkins │ v1.37.0 │ 25 Oct 25 10:03 UTC │ 25 Oct 25 10:03 UTC │
	│ delete  │ -p multinode-315751                                                                                                                                         │ multinode-315751      │ jenkins │ v1.37.0 │ 25 Oct 25 10:03 UTC │ 25 Oct 25 10:03 UTC │
	│ start   │ -p test-preload-977271 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0 │ test-preload-977271   │ jenkins │ v1.37.0 │ 25 Oct 25 10:03 UTC │ 25 Oct 25 10:03 UTC │
	│ image   │ test-preload-977271 image pull gcr.io/k8s-minikube/busybox                                                                                                  │ test-preload-977271   │ jenkins │ v1.37.0 │ 25 Oct 25 10:03 UTC │ 25 Oct 25 10:03 UTC │
	│ stop    │ -p test-preload-977271                                                                                                                                      │ test-preload-977271   │ jenkins │ v1.37.0 │ 25 Oct 25 10:03 UTC │ 25 Oct 25 10:04 UTC │
	│ start   │ -p test-preload-977271 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker                                         │ test-preload-977271   │ jenkins │ v1.37.0 │ 25 Oct 25 10:04 UTC │ 25 Oct 25 10:04 UTC │
	│ image   │ test-preload-977271 image list                                                                                                                              │ test-preload-977271   │ jenkins │ v1.37.0 │ 25 Oct 25 10:04 UTC │ 25 Oct 25 10:04 UTC │
	│ delete  │ -p test-preload-977271                                                                                                                                      │ test-preload-977271   │ jenkins │ v1.37.0 │ 25 Oct 25 10:04 UTC │ 25 Oct 25 10:04 UTC │
	│ start   │ -p scheduled-stop-906952 --memory=3072 --driver=docker  --container-runtime=docker                                                                          │ scheduled-stop-906952 │ jenkins │ v1.37.0 │ 25 Oct 25 10:04 UTC │ 25 Oct 25 10:05 UTC │
	│ stop    │ -p scheduled-stop-906952 --schedule 5m                                                                                                                      │ scheduled-stop-906952 │ jenkins │ v1.37.0 │ 25 Oct 25 10:05 UTC │                     │
	│ stop    │ -p scheduled-stop-906952 --schedule 5m                                                                                                                      │ scheduled-stop-906952 │ jenkins │ v1.37.0 │ 25 Oct 25 10:05 UTC │                     │
	│ stop    │ -p scheduled-stop-906952 --schedule 5m                                                                                                                      │ scheduled-stop-906952 │ jenkins │ v1.37.0 │ 25 Oct 25 10:05 UTC │                     │
	│ stop    │ -p scheduled-stop-906952 --schedule 15s                                                                                                                     │ scheduled-stop-906952 │ jenkins │ v1.37.0 │ 25 Oct 25 10:05 UTC │                     │
	│ stop    │ -p scheduled-stop-906952 --schedule 15s                                                                                                                     │ scheduled-stop-906952 │ jenkins │ v1.37.0 │ 25 Oct 25 10:05 UTC │                     │
	│ stop    │ -p scheduled-stop-906952 --schedule 15s                                                                                                                     │ scheduled-stop-906952 │ jenkins │ v1.37.0 │ 25 Oct 25 10:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
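	Every stop --schedule row above has an empty END TIME: the command appears to arm a timer for the profile and return immediately rather than block until shutdown. A hedged Go sketch re-issuing the same invocation the audit records (binary path and profile name taken from the rows above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as the audit rows; it returns as soon as the stop is scheduled.
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "scheduled-stop-906952", "--schedule", "15s")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("stop --schedule failed: %v\n%s", err, out)
			return
		}
		fmt.Println("stop scheduled; the node should halt roughly 15s from now")
	}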
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:04:58
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:04:58.672803  749897 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:04:58.672920  749897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:04:58.672923  749897 out.go:374] Setting ErrFile to fd 2...
	I1025 10:04:58.672926  749897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:04:58.673095  749897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
	I1025 10:04:58.673560  749897 out.go:368] Setting JSON to false
	I1025 10:04:58.674481  749897 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":6447,"bootTime":1761380252,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 10:04:58.674537  749897 start.go:141] virtualization: kvm guest
	I1025 10:04:58.677309  749897 out.go:179] * [scheduled-stop-906952] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 10:04:58.678776  749897 notify.go:220] Checking for updates...
	I1025 10:04:58.678793  749897 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 10:04:58.680250  749897 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:04:58.681722  749897 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig
	I1025 10:04:58.683167  749897 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube
	I1025 10:04:58.684716  749897 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 10:04:58.686152  749897 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:04:58.687722  749897 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:04:58.711911  749897 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 10:04:58.712011  749897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:04:58.773224  749897 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-25 10:04:58.761860508 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:04:58.773327  749897 docker.go:318] overlay module found
	I1025 10:04:58.775123  749897 out.go:179] * Using the docker driver based on user configuration
	I1025 10:04:58.776292  749897 start.go:305] selected driver: docker
	I1025 10:04:58.776305  749897 start.go:925] validating driver "docker" against <nil>
	I1025 10:04:58.776315  749897 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:04:58.776944  749897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:04:58.838379  749897 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-25 10:04:58.828854631 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 10:04:58.838538  749897 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:04:58.838790  749897 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 10:04:58.840609  749897 out.go:179] * Using Docker driver with root privileges
	I1025 10:04:58.841729  749897 cni.go:84] Creating CNI manager for ""
	I1025 10:04:58.841795  749897 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 10:04:58.841805  749897 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 10:04:58.841875  749897 start.go:349] cluster config:
	{Name:scheduled-stop-906952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-906952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:04:58.842981  749897 out.go:179] * Starting "scheduled-stop-906952" primary control-plane node in "scheduled-stop-906952" cluster
	I1025 10:04:58.844199  749897 cache.go:123] Beginning downloading kic base image for docker with docker
	I1025 10:04:58.845334  749897 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:04:58.846518  749897 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1025 10:04:58.846569  749897 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-499776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
	I1025 10:04:58.846590  749897 cache.go:58] Caching tarball of preloaded images
	I1025 10:04:58.846685  749897 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:04:58.846698  749897 preload.go:233] Found /home/jenkins/minikube-integration/21767-499776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 10:04:58.846705  749897 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on docker
	I1025 10:04:58.847020  749897 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/config.json ...
	I1025 10:04:58.847036  749897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/config.json: {Name:mk6846d1259b5793663d5050e7d195c364ec6019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:04:58.867992  749897 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:04:58.868008  749897 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:04:58.868025  749897 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:04:58.868062  749897 start.go:360] acquireMachinesLock for scheduled-stop-906952: {Name:mk5bc417a419a5415bb526badb83248d4a014274 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:04:58.868160  749897 start.go:364] duration metric: took 84.899µs to acquireMachinesLock for "scheduled-stop-906952"
	I1025 10:04:58.868180  749897 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-906952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-906952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 10:04:58.868249  749897 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:04:58.870103  749897 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:04:58.870324  749897 start.go:159] libmachine.API.Create for "scheduled-stop-906952" (driver="docker")
	I1025 10:04:58.870350  749897 client.go:168] LocalClient.Create starting
	I1025 10:04:58.870433  749897 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca.pem
	I1025 10:04:58.870459  749897 main.go:141] libmachine: Decoding PEM data...
	I1025 10:04:58.870471  749897 main.go:141] libmachine: Parsing certificate...
	I1025 10:04:58.870536  749897 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-499776/.minikube/certs/cert.pem
	I1025 10:04:58.870559  749897 main.go:141] libmachine: Decoding PEM data...
	I1025 10:04:58.870566  749897 main.go:141] libmachine: Parsing certificate...
	I1025 10:04:58.870914  749897 cli_runner.go:164] Run: docker network inspect scheduled-stop-906952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:04:58.888872  749897 cli_runner.go:211] docker network inspect scheduled-stop-906952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:04:58.888959  749897 network_create.go:284] running [docker network inspect scheduled-stop-906952] to gather additional debugging logs...
	I1025 10:04:58.888977  749897 cli_runner.go:164] Run: docker network inspect scheduled-stop-906952
	W1025 10:04:58.906248  749897 cli_runner.go:211] docker network inspect scheduled-stop-906952 returned with exit code 1
	I1025 10:04:58.906268  749897 network_create.go:287] error running [docker network inspect scheduled-stop-906952]: docker network inspect scheduled-stop-906952: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-906952 not found
	I1025 10:04:58.906291  749897 network_create.go:289] output of [docker network inspect scheduled-stop-906952]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-906952 not found
	
	** /stderr **
	I1025 10:04:58.906419  749897 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:04:58.924379  749897 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ed3c1622b44b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ba:2f:2c:44:75} reservation:<nil>}
	I1025 10:04:58.924776  749897 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d5b7689ee125 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:98:33:d6:4b:18} reservation:<nil>}
	I1025 10:04:58.925109  749897 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a013addfef7b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:39:c6:7b:22:0d} reservation:<nil>}
	I1025 10:04:58.925525  749897 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014d7940}
	I1025 10:04:58.925544  749897 network_create.go:124] attempt to create docker network scheduled-stop-906952 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 10:04:58.925614  749897 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-906952 scheduled-stop-906952
	I1025 10:04:58.984685  749897 network_create.go:108] docker network scheduled-stop-906952 192.168.76.0/24 created
	I1025 10:04:58.984727  749897 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-906952" container
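	The network.go lines above walk candidate private /24 subnets (192.168.49.0, .58, .67, stepping the third octet by 9) and take the first one with no existing bridge. A simplified Go sketch of that scan, with isTaken standing in for the real interface probe:

	package main

	import "fmt"

	// isTaken is illustrative only; the log shows 49, 58 and 67 already claimed
	// by bridges from other profiles, so the scan lands on 76.
	func isTaken(subnet string) bool {
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		return taken[subnet]
	}

	func main() {
		for octet := 49; octet <= 255; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !isTaken(subnet) {
				fmt.Println("using free private subnet", subnet) // 192.168.76.0/24 here
				return
			}
		}
	}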
	I1025 10:04:58.984795  749897 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:04:59.003070  749897 cli_runner.go:164] Run: docker volume create scheduled-stop-906952 --label name.minikube.sigs.k8s.io=scheduled-stop-906952 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:04:59.023159  749897 oci.go:103] Successfully created a docker volume scheduled-stop-906952
	I1025 10:04:59.023244  749897 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-906952-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-906952 --entrypoint /usr/bin/test -v scheduled-stop-906952:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:04:59.402424  749897 oci.go:107] Successfully prepared a docker volume scheduled-stop-906952
	I1025 10:04:59.402452  749897 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1025 10:04:59.402471  749897 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:04:59.402545  749897 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-499776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-906952:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 10:05:03.225513  749897 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-499776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-906952:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.822913156s)
	I1025 10:05:03.225535  749897 kic.go:203] duration metric: took 3.823059608s to extract preloaded images to volume ...
	W1025 10:05:03.225640  749897 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1025 10:05:03.225681  749897 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1025 10:05:03.225718  749897 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:05:03.283759  749897 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-906952 --name scheduled-stop-906952 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-906952 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-906952 --network scheduled-stop-906952 --ip 192.168.76.2 --volume scheduled-stop-906952:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:05:03.550783  749897 cli_runner.go:164] Run: docker container inspect scheduled-stop-906952 --format={{.State.Running}}
	I1025 10:05:03.570671  749897 cli_runner.go:164] Run: docker container inspect scheduled-stop-906952 --format={{.State.Status}}
	I1025 10:05:03.589223  749897 cli_runner.go:164] Run: docker exec scheduled-stop-906952 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:05:03.642548  749897 oci.go:144] the created container "scheduled-stop-906952" has a running status.
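	In the docker run command above, each --publish=127.0.0.1:: flag leaves the host port empty, so Docker picks an ephemeral loopback port per guest port; those picks are the 33363-33367 bindings in the inspect dump earlier. A one-off Go sketch for resolving such a binding after the fact:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Ask Docker which host port it bound for guest port 22/tcp.
		out, err := exec.Command("docker", "port", "scheduled-stop-906952", "22/tcp").Output()
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out)) // e.g. 127.0.0.1:33363
	}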
	I1025 10:05:03.642572  749897 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-499776/.minikube/machines/scheduled-stop-906952/id_rsa...
	I1025 10:05:04.053239  749897 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-499776/.minikube/machines/scheduled-stop-906952/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:05:04.079706  749897 cli_runner.go:164] Run: docker container inspect scheduled-stop-906952 --format={{.State.Status}}
	I1025 10:05:04.098887  749897 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:05:04.098901  749897 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-906952 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:05:04.143266  749897 cli_runner.go:164] Run: docker container inspect scheduled-stop-906952 --format={{.State.Status}}
	I1025 10:05:04.162726  749897 machine.go:93] provisionDockerMachine start ...
	I1025 10:05:04.162823  749897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-906952
	I1025 10:05:04.182321  749897 main.go:141] libmachine: Using SSH client type: native
	I1025 10:05:04.182562  749897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33363 <nil> <nil>}
	I1025 10:05:04.182574  749897 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:05:04.326026  749897 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-906952
	
	I1025 10:05:04.326062  749897 ubuntu.go:182] provisioning hostname "scheduled-stop-906952"
	I1025 10:05:04.326172  749897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-906952
	I1025 10:05:04.344245  749897 main.go:141] libmachine: Using SSH client type: native
	I1025 10:05:04.344471  749897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33363 <nil> <nil>}
	I1025 10:05:04.344478  749897 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-906952 && echo "scheduled-stop-906952" | sudo tee /etc/hostname
	I1025 10:05:04.496414  749897 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-906952
	
	I1025 10:05:04.496508  749897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-906952
	I1025 10:05:04.515594  749897 main.go:141] libmachine: Using SSH client type: native
	I1025 10:05:04.515837  749897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33363 <nil> <nil>}
	I1025 10:05:04.515849  749897 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-906952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-906952/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-906952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:05:04.660094  749897 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:05:04.660116  749897 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-499776/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-499776/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-499776/.minikube}
	I1025 10:05:04.660141  749897 ubuntu.go:190] setting up certificates
	I1025 10:05:04.660154  749897 provision.go:84] configureAuth start
	I1025 10:05:04.660220  749897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-906952
	I1025 10:05:04.678562  749897 provision.go:143] copyHostCerts
	I1025 10:05:04.678638  749897 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-499776/.minikube/ca.pem, removing ...
	I1025 10:05:04.678676  749897 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-499776/.minikube/ca.pem
	I1025 10:05:04.678754  749897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-499776/.minikube/ca.pem (1082 bytes)
	I1025 10:05:04.678893  749897 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-499776/.minikube/cert.pem, removing ...
	I1025 10:05:04.678900  749897 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-499776/.minikube/cert.pem
	I1025 10:05:04.678938  749897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-499776/.minikube/cert.pem (1123 bytes)
	I1025 10:05:04.679016  749897 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-499776/.minikube/key.pem, removing ...
	I1025 10:05:04.679019  749897 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-499776/.minikube/key.pem
	I1025 10:05:04.679053  749897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-499776/.minikube/key.pem (1679 bytes)
	I1025 10:05:04.679119  749897 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-499776/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-906952 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-906952]
	I1025 10:05:05.551840  749897 provision.go:177] copyRemoteCerts
	I1025 10:05:05.551905  749897 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:05:05.551941  749897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-906952
	I1025 10:05:05.569859  749897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/scheduled-stop-906952/id_rsa Username:docker}
	I1025 10:05:05.672248  749897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:05:05.692417  749897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1025 10:05:05.711876  749897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:05:05.730508  749897 provision.go:87] duration metric: took 1.070340216s to configureAuth
	I1025 10:05:05.730534  749897 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:05:05.730753  749897 config.go:182] Loaded profile config "scheduled-stop-906952": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 10:05:05.730817  749897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-906952
	I1025 10:05:05.749247  749897 main.go:141] libmachine: Using SSH client type: native
	I1025 10:05:05.749456  749897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33363 <nil> <nil>}
	I1025 10:05:05.749462  749897 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 10:05:05.892645  749897 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1025 10:05:05.892662  749897 ubuntu.go:71] root file system type: overlay
	I1025 10:05:05.892773  749897 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 10:05:05.892864  749897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-906952
	I1025 10:05:05.911863  749897 main.go:141] libmachine: Using SSH client type: native
	I1025 10:05:05.912089  749897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33363 <nil> <nil>}
	I1025 10:05:05.912144  749897 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 10:05:06.065275  749897 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 10:05:06.065350  749897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-906952
	I1025 10:05:06.084281  749897 main.go:141] libmachine: Using SSH client type: native
	I1025 10:05:06.084482  749897 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33363 <nil> <nil>}
	I1025 10:05:06.084493  749897 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 10:05:07.234502  749897 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-10-08 12:15:50.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-10-25 10:05:06.062736297 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
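	The diff output above comes from a write-if-changed guard: the freshly rendered unit only replaces /lib/systemd/system/docker.service, and Docker is only restarted, when diff -u reports a difference. A rough local-file Go analogue of the idiom (paths and the reload hook are illustrative):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// installIfChanged skips both the write and the reload when the rendered
	// content already matches what is installed.
	func installIfChanged(path string, rendered []byte, reload func() error) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return nil // unchanged: nothing to do
		}
		if err := os.WriteFile(path, rendered, 0o644); err != nil {
			return err
		}
		return reload()
	}

	func main() {
		err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"), func() error {
			fmt.Println("would run: systemctl daemon-reload && systemctl restart docker")
			return nil
		})
		if err != nil {
			fmt.Println(err)
		}
	}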
	
	I1025 10:05:07.234525  749897 machine.go:96] duration metric: took 3.07178643s to provisionDockerMachine
	I1025 10:05:07.234535  749897 client.go:171] duration metric: took 8.364181094s to LocalClient.Create
	I1025 10:05:07.234553  749897 start.go:167] duration metric: took 8.364231036s to libmachine.API.Create "scheduled-stop-906952"
	I1025 10:05:07.234559  749897 start.go:293] postStartSetup for "scheduled-stop-906952" (driver="docker")
	I1025 10:05:07.234567  749897 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:05:07.234681  749897 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:05:07.234713  749897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-906952
	I1025 10:05:07.252608  749897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/scheduled-stop-906952/id_rsa Username:docker}
	I1025 10:05:07.354648  749897 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:05:07.358359  749897 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:05:07.358375  749897 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:05:07.358385  749897 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-499776/.minikube/addons for local assets ...
	I1025 10:05:07.358447  749897 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-499776/.minikube/files for local assets ...
	I1025 10:05:07.358527  749897 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-499776/.minikube/files/etc/ssl/certs/5033462.pem -> 5033462.pem in /etc/ssl/certs
	I1025 10:05:07.358641  749897 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:05:07.366440  749897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/files/etc/ssl/certs/5033462.pem --> /etc/ssl/certs/5033462.pem (1708 bytes)
	I1025 10:05:07.386454  749897 start.go:296] duration metric: took 151.879808ms for postStartSetup
	I1025 10:05:07.386811  749897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-906952
	I1025 10:05:07.404020  749897 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/config.json ...
	I1025 10:05:07.404297  749897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:05:07.404334  749897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-906952
	I1025 10:05:07.421498  749897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/scheduled-stop-906952/id_rsa Username:docker}
	I1025 10:05:07.519052  749897 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:05:07.523804  749897 start.go:128] duration metric: took 8.655540036s to createHost
	I1025 10:05:07.523821  749897 start.go:83] releasing machines lock for "scheduled-stop-906952", held for 8.655654537s
	I1025 10:05:07.523893  749897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-906952
	I1025 10:05:07.540981  749897 ssh_runner.go:195] Run: cat /version.json
	I1025 10:05:07.541024  749897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-906952
	I1025 10:05:07.541053  749897 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:05:07.541108  749897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-906952
	I1025 10:05:07.558714  749897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/scheduled-stop-906952/id_rsa Username:docker}
	I1025 10:05:07.559155  749897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/scheduled-stop-906952/id_rsa Username:docker}
	I1025 10:05:07.654826  749897 ssh_runner.go:195] Run: systemctl --version
	I1025 10:05:07.706984  749897 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:05:07.711966  749897 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:05:07.712016  749897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:05:07.737352  749897 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 10:05:07.737371  749897 start.go:495] detecting cgroup driver to use...
	I1025 10:05:07.737399  749897 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:05:07.737509  749897 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:05:07.751224  749897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1025 10:05:07.761544  749897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 10:05:07.770630  749897 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I1025 10:05:07.770688  749897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I1025 10:05:07.779534  749897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 10:05:07.788344  749897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 10:05:07.797111  749897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 10:05:07.805893  749897 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:05:07.814219  749897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 10:05:07.823017  749897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1025 10:05:07.831468  749897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1025 10:05:07.840543  749897 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:05:07.847776  749897 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:05:07.854994  749897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:05:07.933139  749897 ssh_runner.go:195] Run: sudo systemctl restart containerd
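
[Editor's note] The run of sed edits above normalizes /etc/containerd/config.toml before the daemon restart: pin the pause image, force SystemdCgroup = true to match the detected "systemd" host cgroup driver, and migrate legacy runtime names to io.containerd.runc.v2. A minimal Go sketch of the SystemdCgroup flip (illustrative only, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	data, err := os.ReadFile("/etc/containerd/config.toml")
	if err != nil {
		panic(err)
	}
	// Matches lines like "  SystemdCgroup = false" and preserves indentation,
	// mirroring `sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'`.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = true"))
	if err := os.WriteFile("/etc/containerd/config.toml", out, 0644); err != nil {
		panic(err)
	}
	fmt.Println("containerd now uses the systemd cgroup driver")
}

The kubelet side of the same agreement shows up later in the generated KubeletConfiguration (cgroupDriver: systemd), so both ends of the CRI use the same driver.
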
	I1025 10:05:08.011944  749897 start.go:495] detecting cgroup driver to use...
	I1025 10:05:08.011988  749897 detect.go:190] detected "systemd" cgroup driver on host os
	I1025 10:05:08.012028  749897 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 10:05:08.025748  749897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:05:08.038179  749897 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:05:08.058250  749897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:05:08.070987  749897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 10:05:08.083275  749897 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:05:08.097062  749897 ssh_runner.go:195] Run: which cri-dockerd
	I1025 10:05:08.100748  749897 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 10:05:08.109757  749897 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1025 10:05:08.123016  749897 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 10:05:08.206743  749897 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 10:05:08.286755  749897 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I1025 10:05:08.286863  749897 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I1025 10:05:08.300047  749897 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1025 10:05:08.311855  749897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:05:08.391818  749897 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 10:05:09.145859  749897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:05:09.158747  749897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1025 10:05:09.171747  749897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 10:05:09.186051  749897 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 10:05:09.271496  749897 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 10:05:09.355881  749897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:05:09.439526  749897 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 10:05:09.463153  749897 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1025 10:05:09.476132  749897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:05:09.559308  749897 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1025 10:05:09.629785  749897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 10:05:09.642314  749897 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 10:05:09.642365  749897 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 10:05:09.646162  749897 start.go:563] Will wait 60s for crictl version
	I1025 10:05:09.646208  749897 ssh_runner.go:195] Run: which crictl
	I1025 10:05:09.649569  749897 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:05:09.674608  749897 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.5.1
	RuntimeApiVersion:  v1
	I1025 10:05:09.674674  749897 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 10:05:09.700136  749897 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 10:05:09.727111  749897 out.go:252] * Preparing Kubernetes v1.34.1 on Docker 28.5.1 ...
	I1025 10:05:09.727191  749897 cli_runner.go:164] Run: docker network inspect scheduled-stop-906952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:05:09.743563  749897 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:05:09.747884  749897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
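
[Editor's note] The bash one-liner above rewrites /etc/hosts idempotently: strip any previous host.minikube.internal entry, append the current gateway IP, and copy the result back with sudo. The same logic in Go, as a sketch that assumes the gateway is 192.168.76.1 as in this run:

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.76.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as `grep -v $'\thost.minikube.internal$'` above.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
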
	I1025 10:05:09.758169  749897 kubeadm.go:883] updating cluster {Name:scheduled-stop-906952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-906952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:05:09.758289  749897 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
	I1025 10:05:09.758339  749897 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 10:05:09.778544  749897 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 10:05:09.778558  749897 docker.go:621] Images already preloaded, skipping extraction
	I1025 10:05:09.778628  749897 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 10:05:09.799488  749897 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 10:05:09.799508  749897 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:05:09.799518  749897 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 docker true true} ...
	I1025 10:05:09.799635  749897 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=scheduled-stop-906952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-906952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:05:09.799691  749897 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 10:05:09.851199  749897 cni.go:84] Creating CNI manager for ""
	I1025 10:05:09.851224  749897 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 10:05:09.851243  749897 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:05:09.851270  749897 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-906952 NodeName:scheduled-stop-906952 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:05:09.851391  749897 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "scheduled-stop-906952"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:05:09.851448  749897 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:05:09.859729  749897 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:05:09.859781  749897 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:05:09.867806  749897 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1025 10:05:09.880818  749897 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:05:09.893484  749897 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
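
[Editor's note] The kubeadm config staged above at /var/tmp/minikube/kubeadm.yaml.new bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). When debugging startup failures, one useful check is that the kubelet's cgroupDriver matches the runtime's. A small sketch that pulls that field out of the multi-document YAML; the struct is hypothetical and gopkg.in/yaml.v3 is an assumed dependency, not minikube's own code:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	Kind         string `yaml:"kind"`
	CgroupDriver string `yaml:"cgroupDriver"`
}

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	// The file is a stream of YAML documents separated by "---" lines.
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var kc kubeletConfig
		if yaml.Unmarshal([]byte(doc), &kc) == nil && kc.Kind == "KubeletConfiguration" {
			fmt.Println("kubelet cgroup driver:", kc.CgroupDriver) // expect "systemd" in this run
		}
	}
}
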
	I1025 10:05:09.905933  749897 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:05:09.909519  749897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:05:09.919418  749897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:05:10.000138  749897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:05:10.026289  749897 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952 for IP: 192.168.76.2
	I1025 10:05:10.026303  749897 certs.go:195] generating shared ca certs ...
	I1025 10:05:10.026317  749897 certs.go:227] acquiring lock for ca certs: {Name:mk591f43cf4589df71f5cb0e6167ddf369a67a39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:05:10.026491  749897 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-499776/.minikube/ca.key
	I1025 10:05:10.026537  749897 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-499776/.minikube/proxy-client-ca.key
	I1025 10:05:10.026545  749897 certs.go:257] generating profile certs ...
	I1025 10:05:10.026668  749897 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/client.key
	I1025 10:05:10.026687  749897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/client.crt with IP's: []
	I1025 10:05:10.289015  749897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/client.crt ...
	I1025 10:05:10.289034  749897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/client.crt: {Name:mkc297913342d534b8e6bdfd250579b61763b314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:05:10.289220  749897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/client.key ...
	I1025 10:05:10.289229  749897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/client.key: {Name:mkcb20a1ce5d534996179ce27fc92c72a61c988a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:05:10.289314  749897 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/apiserver.key.a1bfe1d9
	I1025 10:05:10.289326  749897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/apiserver.crt.a1bfe1d9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 10:05:10.369967  749897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/apiserver.crt.a1bfe1d9 ...
	I1025 10:05:10.369983  749897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/apiserver.crt.a1bfe1d9: {Name:mkba5e9dc7d7c75c8b3e2a075ae0e4d0f12f794a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:05:10.370147  749897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/apiserver.key.a1bfe1d9 ...
	I1025 10:05:10.370155  749897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/apiserver.key.a1bfe1d9: {Name:mk1d78f585d1414b824f3eb0655f0cf942165b67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:05:10.370223  749897 certs.go:382] copying /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/apiserver.crt.a1bfe1d9 -> /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/apiserver.crt
	I1025 10:05:10.370296  749897 certs.go:386] copying /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/apiserver.key.a1bfe1d9 -> /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/apiserver.key
	I1025 10:05:10.370343  749897 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/proxy-client.key
	I1025 10:05:10.370354  749897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/proxy-client.crt with IP's: []
	I1025 10:05:10.827198  749897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/proxy-client.crt ...
	I1025 10:05:10.827217  749897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/proxy-client.crt: {Name:mka640579e15fb56ed7572d3d97395fc6a7a8f6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:05:10.827412  749897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/proxy-client.key ...
	I1025 10:05:10.827420  749897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/proxy-client.key: {Name:mk4b08f570bb90128973d903d581ff20468c2066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
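
[Editor's note] crypto.go generates three profile cert pairs here: a client cert for "minikube-user", the apiserver serving cert (with the SANs listed above), and the front-proxy "aggregator" client cert. For orientation, a generic sketch of CA-signed client-cert generation with crypto/x509; the subject, organization, and file names are placeholders, not minikube's actual values:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Error handling elided for brevity; assumes PEM-encoded RSA CA material.
	caPEM, _ := os.ReadFile("ca.crt")
	caKeyPEM, _ := os.ReadFile("ca.key")
	caBlock, _ := pem.Decode(caPEM)
	ca, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-user"}, // placeholder subject
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	// Sign the new leaf cert with the CA's key.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
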
	I1025 10:05:10.827657  749897 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/503346.pem (1338 bytes)
	W1025 10:05:10.827696  749897 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-499776/.minikube/certs/503346_empty.pem, impossibly tiny 0 bytes
	I1025 10:05:10.827702  749897 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:05:10.827721  749897 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:05:10.827740  749897 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:05:10.827762  749897 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-499776/.minikube/certs/key.pem (1679 bytes)
	I1025 10:05:10.827798  749897 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-499776/.minikube/files/etc/ssl/certs/5033462.pem (1708 bytes)
	I1025 10:05:10.828346  749897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:05:10.846975  749897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:05:10.864943  749897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:05:10.882732  749897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:05:10.900303  749897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 10:05:10.917874  749897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I1025 10:05:10.935187  749897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:05:10.952270  749897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/scheduled-stop-906952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:05:10.969991  749897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/files/etc/ssl/certs/5033462.pem --> /usr/share/ca-certificates/5033462.pem (1708 bytes)
	I1025 10:05:10.992167  749897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:05:11.010068  749897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-499776/.minikube/certs/503346.pem --> /usr/share/ca-certificates/503346.pem (1338 bytes)
	I1025 10:05:11.027806  749897 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:05:11.040454  749897 ssh_runner.go:195] Run: openssl version
	I1025 10:05:11.046718  749897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/503346.pem && ln -fs /usr/share/ca-certificates/503346.pem /etc/ssl/certs/503346.pem"
	I1025 10:05:11.055155  749897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/503346.pem
	I1025 10:05:11.058999  749897 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:28 /usr/share/ca-certificates/503346.pem
	I1025 10:05:11.059055  749897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/503346.pem
	I1025 10:05:11.092867  749897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/503346.pem /etc/ssl/certs/51391683.0"
	I1025 10:05:11.101923  749897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5033462.pem && ln -fs /usr/share/ca-certificates/5033462.pem /etc/ssl/certs/5033462.pem"
	I1025 10:05:11.110485  749897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5033462.pem
	I1025 10:05:11.114206  749897 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:28 /usr/share/ca-certificates/5033462.pem
	I1025 10:05:11.114254  749897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5033462.pem
	I1025 10:05:11.147890  749897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5033462.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:05:11.156812  749897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:05:11.165453  749897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:05:11.169091  749897 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:16 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:05:11.169144  749897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:05:11.203005  749897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
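
[Editor's note] The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes: OpenSSL locates a CA in /etc/ssl/certs by hashing its subject, so each trusted .pem gets a <hash>.0 alias. A sketch reproducing the step by shelling out to openssl, using the paths from this run:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -hash -noout` prints the subject-name hash OpenSSL
	// expects as the /etc/ssl/certs/<hash>.0 link name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" in the log above
	link := "/etc/ssl/certs/" + hash + ".0"
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link)
}
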
	I1025 10:05:11.212532  749897 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:05:11.216569  749897 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:05:11.216641  749897 kubeadm.go:400] StartCluster: {Name:scheduled-stop-906952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-906952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:05:11.216742  749897 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 10:05:11.237043  749897 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:05:11.245261  749897 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:05:11.253227  749897 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:05:11.253268  749897 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:05:11.261030  749897 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:05:11.261041  749897 kubeadm.go:157] found existing configuration files:
	
	I1025 10:05:11.261084  749897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:05:11.268994  749897 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:05:11.269045  749897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:05:11.276332  749897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:05:11.284179  749897 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:05:11.284231  749897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:05:11.291751  749897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:05:11.299432  749897 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:05:11.299474  749897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:05:11.306933  749897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:05:11.314660  749897 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:05:11.314715  749897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:05:11.322504  749897 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:05:11.397645  749897 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1025 10:05:11.457452  749897 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 10:05:20.787775  749897 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:05:20.787843  749897 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:05:20.787974  749897 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:05:20.788045  749897 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1025 10:05:20.788090  749897 kubeadm.go:318] OS: Linux
	I1025 10:05:20.788161  749897 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:05:20.788226  749897 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:05:20.788270  749897 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:05:20.788307  749897 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:05:20.788368  749897 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:05:20.788433  749897 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:05:20.788501  749897 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:05:20.788567  749897 kubeadm.go:318] CGROUPS_IO: enabled
	I1025 10:05:20.788676  749897 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:05:20.788767  749897 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:05:20.788882  749897 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:05:20.788932  749897 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:05:20.791117  749897 out.go:252]   - Generating certificates and keys ...
	I1025 10:05:20.791208  749897 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:05:20.791259  749897 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:05:20.791327  749897 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:05:20.791392  749897 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:05:20.791455  749897 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:05:20.791518  749897 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:05:20.791628  749897 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:05:20.791759  749897 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-906952] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:05:20.791804  749897 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:05:20.791961  749897 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-906952] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:05:20.792023  749897 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:05:20.792107  749897 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:05:20.792171  749897 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:05:20.792256  749897 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:05:20.792344  749897 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:05:20.792415  749897 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:05:20.792476  749897 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:05:20.792539  749897 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:05:20.792605  749897 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:05:20.792697  749897 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:05:20.792748  749897 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:05:20.794023  749897 out.go:252]   - Booting up control plane ...
	I1025 10:05:20.794097  749897 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:05:20.794179  749897 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:05:20.794237  749897 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:05:20.794321  749897 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:05:20.794412  749897 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:05:20.794505  749897 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:05:20.794616  749897 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:05:20.794669  749897 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:05:20.794777  749897 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:05:20.794856  749897 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:05:20.794902  749897 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.040996ms
	I1025 10:05:20.795016  749897 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:05:20.795109  749897 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1025 10:05:20.795200  749897 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:05:20.795259  749897 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:05:20.795350  749897 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.57775331s
	I1025 10:05:20.795426  749897 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.801699129s
	I1025 10:05:20.795496  749897 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501714122s
	I1025 10:05:20.795615  749897 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:05:20.795749  749897 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:05:20.795796  749897 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:05:20.796005  749897 kubeadm.go:318] [mark-control-plane] Marking the node scheduled-stop-906952 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:05:20.796050  749897 kubeadm.go:318] [bootstrap-token] Using token: 8l3n8e.fhl8tsf2mazspoa7
	I1025 10:05:20.800151  749897 out.go:252]   - Configuring RBAC rules ...
	I1025 10:05:20.800259  749897 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:05:20.800332  749897 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:05:20.800462  749897 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:05:20.800574  749897 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:05:20.800703  749897 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:05:20.800773  749897 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:05:20.800882  749897 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:05:20.800924  749897 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:05:20.800962  749897 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:05:20.800964  749897 kubeadm.go:318] 
	I1025 10:05:20.801013  749897 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:05:20.801015  749897 kubeadm.go:318] 
	I1025 10:05:20.801098  749897 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:05:20.801106  749897 kubeadm.go:318] 
	I1025 10:05:20.801126  749897 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:05:20.801190  749897 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:05:20.801234  749897 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:05:20.801237  749897 kubeadm.go:318] 
	I1025 10:05:20.801287  749897 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:05:20.801289  749897 kubeadm.go:318] 
	I1025 10:05:20.801325  749897 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:05:20.801327  749897 kubeadm.go:318] 
	I1025 10:05:20.801371  749897 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:05:20.801428  749897 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:05:20.801487  749897 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:05:20.801490  749897 kubeadm.go:318] 
	I1025 10:05:20.801557  749897 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:05:20.801644  749897 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:05:20.801651  749897 kubeadm.go:318] 
	I1025 10:05:20.801719  749897 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 8l3n8e.fhl8tsf2mazspoa7 \
	I1025 10:05:20.801806  749897 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c90a3b482422c1132c705eb6f8dc3664d0c29dd0e4f154a7770e9ff4c357ad9d \
	I1025 10:05:20.801824  749897 kubeadm.go:318] 	--control-plane 
	I1025 10:05:20.801827  749897 kubeadm.go:318] 
	I1025 10:05:20.801900  749897 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:05:20.801902  749897 kubeadm.go:318] 
	I1025 10:05:20.801978  749897 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 8l3n8e.fhl8tsf2mazspoa7 \
	I1025 10:05:20.802100  749897 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c90a3b482422c1132c705eb6f8dc3664d0c29dd0e4f154a7770e9ff4c357ad9d 
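
[Editor's note] The --discovery-token-ca-cert-hash printed by kubeadm above is not a hash of the whole certificate: it is a SHA-256 over the DER-encoded Subject Public Key Info of the cluster CA, which joining nodes use to pin the control plane before trusting it. A sketch of the derivation, reading the CA path used in this run:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(caPEM)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the SubjectPublicKeyInfo, the same pin format kubeadm emits.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
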
	I1025 10:05:20.802111  749897 cni.go:84] Creating CNI manager for ""
	I1025 10:05:20.802154  749897 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 10:05:20.804282  749897 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 10:05:20.805543  749897 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 10:05:20.814437  749897 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
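
[Editor's note] The 496-byte /etc/cni/net.d/1-k8s.conflist written here wires pods onto a Linux bridge with host-local IPAM over the 10.244.0.0/16 pod CIDR chosen earlier. It has roughly this shape; the literal below is an illustrative example, not the exact file minikube ships:

package main

import "fmt"

// Example bridge CNI config of the kind installed above (assumed shape).
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() { fmt.Println(conflist) }
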
	I1025 10:05:20.827806  749897 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:05:20.827942  749897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:05:20.827974  749897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-906952 minikube.k8s.io/updated_at=2025_10_25T10_05_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689 minikube.k8s.io/name=scheduled-stop-906952 minikube.k8s.io/primary=true
	I1025 10:05:20.837761  749897 ops.go:34] apiserver oom_adj: -16
	I1025 10:05:20.916103  749897 kubeadm.go:1113] duration metric: took 88.221416ms to wait for elevateKubeSystemPrivileges
	I1025 10:05:20.916130  749897 kubeadm.go:402] duration metric: took 9.69949737s to StartCluster
	I1025 10:05:20.916146  749897 settings.go:142] acquiring lock: {Name:mkcd1be1e8e86a0216701a7ffe40647298894af3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:05:20.916210  749897 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-499776/kubeconfig
	I1025 10:05:20.916834  749897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/kubeconfig: {Name:mkce2c8734c7bbe9f4385b3c0c646885305b640b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:05:20.917059  749897 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 10:05:20.917124  749897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:05:20.917149  749897 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:05:20.917240  749897 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-906952"
	I1025 10:05:20.917255  749897 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-906952"
	I1025 10:05:20.917262  749897 addons.go:238] Setting addon storage-provisioner=true in "scheduled-stop-906952"
	I1025 10:05:20.917277  749897 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-906952"
	I1025 10:05:20.917291  749897 host.go:66] Checking if "scheduled-stop-906952" exists ...
	I1025 10:05:20.917305  749897 config.go:182] Loaded profile config "scheduled-stop-906952": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 10:05:20.917689  749897 cli_runner.go:164] Run: docker container inspect scheduled-stop-906952 --format={{.State.Status}}
	I1025 10:05:20.917853  749897 cli_runner.go:164] Run: docker container inspect scheduled-stop-906952 --format={{.State.Status}}
	I1025 10:05:20.918565  749897 out.go:179] * Verifying Kubernetes components...
	I1025 10:05:20.919865  749897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:05:20.940498  749897 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:05:20.940664  749897 addons.go:238] Setting addon default-storageclass=true in "scheduled-stop-906952"
	I1025 10:05:20.940700  749897 host.go:66] Checking if "scheduled-stop-906952" exists ...
	I1025 10:05:20.941152  749897 cli_runner.go:164] Run: docker container inspect scheduled-stop-906952 --format={{.State.Status}}
	I1025 10:05:20.941925  749897 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:05:20.941935  749897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:05:20.941983  749897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-906952
	I1025 10:05:20.969417  749897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/scheduled-stop-906952/id_rsa Username:docker}
	I1025 10:05:20.971802  749897 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:05:20.971815  749897 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:05:20.971871  749897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-906952
	I1025 10:05:20.994924  749897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/scheduled-stop-906952/id_rsa Username:docker}
	I1025 10:05:21.004632  749897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:05:21.061408  749897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:05:21.089296  749897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:05:21.118261  749897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:05:21.205615  749897 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1025 10:05:21.206463  749897 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:05:21.206512  749897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:05:21.379256  749897 api_server.go:72] duration metric: took 462.17355ms to wait for apiserver process to appear ...
	I1025 10:05:21.379272  749897 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:05:21.379291  749897 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:05:21.383246  749897 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:05:21.384175  749897 api_server.go:141] control plane version: v1.34.1
	I1025 10:05:21.384191  749897 api_server.go:131] duration metric: took 4.914405ms to wait for apiserver health ...
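
[Editor's note] The health wait above is a plain HTTPS poll of /healthz until the apiserver answers 200. A self-contained sketch of the same loop; TLS verification is skipped here purely for illustration, whereas minikube authenticates with the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver")
}
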
	I1025 10:05:21.384199  749897 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:05:21.386625  749897 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 10:05:21.386702  749897 system_pods.go:59] 5 kube-system pods found
	I1025 10:05:21.386729  749897 system_pods.go:61] "etcd-scheduled-stop-906952" [db556dee-17f1-49d1-8f5a-2ae9cdc8f06b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:05:21.386734  749897 system_pods.go:61] "kube-apiserver-scheduled-stop-906952" [99d7094d-46e2-4c5c-be72-8939cce9e6ba] Running
	I1025 10:05:21.386741  749897 system_pods.go:61] "kube-controller-manager-scheduled-stop-906952" [74647223-73b5-4a24-812f-fbd5d6be3a00] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:05:21.386745  749897 system_pods.go:61] "kube-scheduler-scheduled-stop-906952" [616a9d64-abf9-49c7-840b-b073b82df870] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:05:21.386748  749897 system_pods.go:61] "storage-provisioner" [8214d9bd-7045-41f8-b052-100e07e87b3d] Pending
	I1025 10:05:21.386752  749897 system_pods.go:74] duration metric: took 2.549249ms to wait for pod list to return data ...
	I1025 10:05:21.386760  749897 kubeadm.go:586] duration metric: took 469.6839ms to wait for: map[apiserver:true system_pods:true]
	I1025 10:05:21.386771  749897 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:05:21.387714  749897 addons.go:514] duration metric: took 470.56702ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:05:21.388903  749897 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1025 10:05:21.388923  749897 node_conditions.go:123] node cpu capacity is 8
	I1025 10:05:21.388934  749897 node_conditions.go:105] duration metric: took 2.160671ms to run NodePressure ...
	I1025 10:05:21.388945  749897 start.go:241] waiting for startup goroutines ...
	I1025 10:05:21.709734  749897 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-906952" context rescaled to 1 replicas
	I1025 10:05:21.709761  749897 start.go:246] waiting for cluster config update ...
	I1025 10:05:21.709771  749897 start.go:255] writing updated cluster config ...
	I1025 10:05:21.710052  749897 ssh_runner.go:195] Run: rm -f paused
	I1025 10:05:21.758864  749897 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1025 10:05:21.760815  749897 out.go:179] * Done! kubectl is now configured to use "scheduled-stop-906952" cluster and "default" namespace by default
	
	
	==> Docker <==
	Oct 25 10:05:09 scheduled-stop-906952 dockerd[1058]: time="2025-10-25T10:05:09.107824871Z" level=info msg="Loading containers: done."
	Oct 25 10:05:09 scheduled-stop-906952 dockerd[1058]: time="2025-10-25T10:05:09.118853794Z" level=info msg="Docker daemon" commit=f8215cc containerd-snapshotter=false storage-driver=overlay2 version=28.5.1
	Oct 25 10:05:09 scheduled-stop-906952 dockerd[1058]: time="2025-10-25T10:05:09.118906778Z" level=info msg="Initializing buildkit"
	Oct 25 10:05:09 scheduled-stop-906952 dockerd[1058]: time="2025-10-25T10:05:09.137561996Z" level=info msg="Completed buildkit initialization"
	Oct 25 10:05:09 scheduled-stop-906952 dockerd[1058]: time="2025-10-25T10:05:09.143573993Z" level=info msg="Daemon has completed initialization"
	Oct 25 10:05:09 scheduled-stop-906952 dockerd[1058]: time="2025-10-25T10:05:09.143677979Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 25 10:05:09 scheduled-stop-906952 dockerd[1058]: time="2025-10-25T10:05:09.143688749Z" level=info msg="API listen on [::]:2376"
	Oct 25 10:05:09 scheduled-stop-906952 dockerd[1058]: time="2025-10-25T10:05:09.143690469Z" level=info msg="API listen on /run/docker.sock"
	Oct 25 10:05:09 scheduled-stop-906952 systemd[1]: Started docker.service - Docker Application Container Engine.
	Oct 25 10:05:09 scheduled-stop-906952 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Oct 25 10:05:09 scheduled-stop-906952 cri-dockerd[1367]: time="2025-10-25T10:05:09Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Oct 25 10:05:09 scheduled-stop-906952 cri-dockerd[1367]: time="2025-10-25T10:05:09Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Oct 25 10:05:09 scheduled-stop-906952 cri-dockerd[1367]: time="2025-10-25T10:05:09Z" level=info msg="Start docker client with request timeout 0s"
	Oct 25 10:05:09 scheduled-stop-906952 cri-dockerd[1367]: time="2025-10-25T10:05:09Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Oct 25 10:05:09 scheduled-stop-906952 cri-dockerd[1367]: time="2025-10-25T10:05:09Z" level=info msg="Loaded network plugin cni"
	Oct 25 10:05:09 scheduled-stop-906952 cri-dockerd[1367]: time="2025-10-25T10:05:09Z" level=info msg="Docker cri networking managed by network plugin cni"
	Oct 25 10:05:09 scheduled-stop-906952 cri-dockerd[1367]: time="2025-10-25T10:05:09Z" level=info msg="Setting cgroupDriver systemd"
	Oct 25 10:05:09 scheduled-stop-906952 cri-dockerd[1367]: time="2025-10-25T10:05:09Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Oct 25 10:05:09 scheduled-stop-906952 cri-dockerd[1367]: time="2025-10-25T10:05:09Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Oct 25 10:05:09 scheduled-stop-906952 cri-dockerd[1367]: time="2025-10-25T10:05:09Z" level=info msg="Start cri-dockerd grpc backend"
	Oct 25 10:05:09 scheduled-stop-906952 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Oct 25 10:05:16 scheduled-stop-906952 cri-dockerd[1367]: time="2025-10-25T10:05:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d9b0a87bd817fae07e8d6a5810407e8ea9c3e69f604c5326a5d986c7ff02f7bc/resolv.conf as [nameserver 192.168.76.1 search local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Oct 25 10:05:16 scheduled-stop-906952 cri-dockerd[1367]: time="2025-10-25T10:05:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/80a32e014b64f7ea0e5b9a9bf8df9df12394e559ceb232127a42b60b05e0c4e9/resolv.conf as [nameserver 192.168.76.1 search local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	Oct 25 10:05:16 scheduled-stop-906952 cri-dockerd[1367]: time="2025-10-25T10:05:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bec0df651739b3f2c314c6fdf1c7ba13f281c49ee1c10c5c70323d7de6cd2468/resolv.conf as [nameserver 192.168.76.1 search local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	Oct 25 10:05:16 scheduled-stop-906952 cri-dockerd[1367]: time="2025-10-25T10:05:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ff108277ac7194cb5c625405905f521a4c4b35fc54da6eae2bbba659716aedd9/resolv.conf as [nameserver 192.168.76.1 search local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                             NAMESPACE
	2450c047ecb77       7dd6aaa1717ab       7 seconds ago       Running             kube-scheduler            0                   ff108277ac719       kube-scheduler-scheduled-stop-906952            kube-system
	3746f5be0cd52       c80c8dbafe7dd       7 seconds ago       Running             kube-controller-manager   0                   80a32e014b64f       kube-controller-manager-scheduled-stop-906952   kube-system
	0feda05a251ed       5f1f5298c888d       7 seconds ago       Running             etcd                      0                   bec0df651739b       etcd-scheduled-stop-906952                      kube-system
	b6b5f689262f0       c3994bc696102       7 seconds ago       Running             kube-apiserver            0                   d9b0a87bd817f       kube-apiserver-scheduled-stop-906952            kube-system
	
	
	==> describe nodes <==
	Name:               scheduled-stop-906952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=scheduled-stop-906952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6017293569ff48e99407bb5ade8e9ba1a7a0c689
	                    minikube.k8s.io/name=scheduled-stop-906952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_05_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:05:17 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-906952
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:05:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:05:20 +0000   Sat, 25 Oct 2025 10:05:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:05:20 +0000   Sat, 25 Oct 2025 10:05:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:05:20 +0000   Sat, 25 Oct 2025 10:05:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 10:05:20 +0000   Sat, 25 Oct 2025 10:05:17 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-906952
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863364Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                5dfa295f-554c-4885-9a52-3a3493a00d6b
	  Boot ID:                    2fda8ac7-743b-4d90-8011-17dbcec8d3ad
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.5.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-906952                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3s
	  kube-system                 kube-apiserver-scheduled-stop-906952             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-scheduled-stop-906952    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-scheduled-stop-906952             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (8%)   0 (0%)
	  memory             100Mi (0%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 3s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet  Node scheduled-stop-906952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet  Node scheduled-stop-906952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet  Node scheduled-stop-906952 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.000015] ll header: 00000000: ff ff ff ff ff ff ca 69 af 92 55 8d 08 06
	[  +0.000498] IPv4: martian source 10.244.0.34 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 70 1e fe 24 24 08 06
	[  +0.000602] IPv4: martian source 10.244.0.34 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 af 9c e7 7c 87 08 06
	[Oct25 09:28] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a6 cb c2 16 78 86 08 06
	[  +0.000817] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 f6 aa 45 b9 f4 08 06
	[Oct25 09:30] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 a3 a0 a4 aa f3 08 06
	[Oct25 09:31] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 d1 a1 fc a6 37 08 06
	[Oct25 09:53] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa 1a 0f bd 89 d4 08 06
	[  +0.003062] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ae a3 ea 0e 67 c6 08 06
	[Oct25 09:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 78 9a a4 f9 23 08 06
	[Oct25 10:03] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 a3 7a 98 22 35 08 06
	[  +0.000346] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 34 0e 6a b6 54 08 06
	[Oct25 10:04] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 3e 92 c6 94 60 08 06
	
	
	==> etcd [0feda05a251e] <==
	{"level":"warn","ts":"2025-10-25T10:05:17.046289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.053156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.059205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.068133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.074330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.081228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.088308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.102905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.109609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.116653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.126717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.133927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.140453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.146710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.153064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.161209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.168398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.180687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.188011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.194635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.200752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.215769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.222193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.228613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:05:17.276529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35298","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:05:23 up  1:47,  0 user,  load average: 2.32, 2.31, 2.01
	Linux scheduled-stop-906952 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [b6b5f689262f] <==
	I1025 10:05:17.728537       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:05:17.728545       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:05:17.728444       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1025 10:05:17.728612       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:05:17.729822       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:05:17.730342       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:05:17.732026       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:05:17.732352       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 10:05:17.738701       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:05:17.741228       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:05:17.743669       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:05:17.921907       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:05:18.633392       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:05:18.637238       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:05:18.637261       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:05:19.077946       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:05:19.114862       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:05:19.237674       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:05:19.243564       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1025 10:05:19.244634       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:05:19.248730       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:05:19.649152       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:05:20.189017       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:05:20.198745       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:05:20.206683       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [3746f5be0cd5] <==
	I1025 10:05:22.099137       1 controllermanager.go:781] "Started controller" controller="token-cleaner-controller"
	I1025 10:05:22.099219       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1025 10:05:22.099230       1 shared_informer.go:349] "Waiting for caches to sync" controller="token_cleaner"
	I1025 10:05:22.099237       1 shared_informer.go:356] "Caches are synced" controller="token_cleaner"
	I1025 10:05:22.353866       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1025 10:05:22.353905       1 controllermanager.go:781] "Started controller" controller="node-ipam-controller"
	I1025 10:05:22.353913       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="node-route-controller"
	I1025 10:05:22.354011       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1025 10:05:22.354029       1 shared_informer.go:349] "Waiting for caches to sync" controller="node"
	I1025 10:05:22.499125       1 controllermanager.go:781] "Started controller" controller="clusterrole-aggregation-controller"
	I1025 10:05:22.499195       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1025 10:05:22.499208       1 shared_informer.go:349] "Waiting for caches to sync" controller="ClusterRoleAggregator"
	I1025 10:05:22.648664       1 controllermanager.go:781] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1025 10:05:22.648754       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1025 10:05:22.648772       1 shared_informer.go:349] "Waiting for caches to sync" controller="PVC protection"
	I1025 10:05:22.799036       1 controllermanager.go:781] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I1025 10:05:22.799067       1 controllermanager.go:744] "Warning: controller is disabled" controller="selinux-warning-controller"
	I1025 10:05:22.799109       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I1025 10:05:22.799118       1 shared_informer.go:349] "Waiting for caches to sync" controller="legacy-service-account-token-cleaner"
	I1025 10:05:22.949325       1 controllermanager.go:781] "Started controller" controller="endpointslice-controller"
	I1025 10:05:22.949494       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1025 10:05:22.949514       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint_slice"
	I1025 10:05:23.099003       1 controllermanager.go:781] "Started controller" controller="replicationcontroller-controller"
	I1025 10:05:23.099115       1 replica_set.go:243] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1025 10:05:23.099127       1 shared_informer.go:349] "Waiting for caches to sync" controller="ReplicationController"
	
	
	==> kube-scheduler [2450c047ecb7] <==
	E1025 10:05:17.674357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:05:17.673946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:05:17.673961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:05:17.673807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:05:17.673906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:05:17.674492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:05:17.674506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:05:17.674598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:05:17.674759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:05:17.674774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:05:17.674879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:05:17.674874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:05:17.674980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:05:18.500050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:05:18.522614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:05:18.538032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:05:18.567918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:05:18.592713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:05:18.625150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:05:18.658217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:05:18.662224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:05:18.830037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1025 10:05:18.870235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:05:18.925548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1025 10:05:21.971025       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:05:20 scheduled-stop-906952 kubelet[2267]: I1025 10:05:20.207986    2267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3aa8a8219ace2c77652961b35d5a43cf-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-906952\" (UID: \"3aa8a8219ace2c77652961b35d5a43cf\") " pod="kube-system/kube-controller-manager-scheduled-stop-906952"
	Oct 25 10:05:20 scheduled-stop-906952 kubelet[2267]: I1025 10:05:20.208037    2267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3aa8a8219ace2c77652961b35d5a43cf-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-906952\" (UID: \"3aa8a8219ace2c77652961b35d5a43cf\") " pod="kube-system/kube-controller-manager-scheduled-stop-906952"
	Oct 25 10:05:20 scheduled-stop-906952 kubelet[2267]: I1025 10:05:20.208068    2267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aaadcee478231d19980bb74737564618-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-906952\" (UID: \"aaadcee478231d19980bb74737564618\") " pod="kube-system/kube-apiserver-scheduled-stop-906952"
	Oct 25 10:05:20 scheduled-stop-906952 kubelet[2267]: I1025 10:05:20.208095    2267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aaadcee478231d19980bb74737564618-k8s-certs\") pod \"kube-apiserver-scheduled-stop-906952\" (UID: \"aaadcee478231d19980bb74737564618\") " pod="kube-system/kube-apiserver-scheduled-stop-906952"
	Oct 25 10:05:20 scheduled-stop-906952 kubelet[2267]: I1025 10:05:20.208125    2267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3aa8a8219ace2c77652961b35d5a43cf-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-906952\" (UID: \"3aa8a8219ace2c77652961b35d5a43cf\") " pod="kube-system/kube-controller-manager-scheduled-stop-906952"
	Oct 25 10:05:20 scheduled-stop-906952 kubelet[2267]: I1025 10:05:20.208151    2267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c8441a351b5bc25f37d766bf08a65650-kubeconfig\") pod \"kube-scheduler-scheduled-stop-906952\" (UID: \"c8441a351b5bc25f37d766bf08a65650\") " pod="kube-system/kube-scheduler-scheduled-stop-906952"
	Oct 25 10:05:20 scheduled-stop-906952 kubelet[2267]: I1025 10:05:20.208219    2267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aaadcee478231d19980bb74737564618-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-906952\" (UID: \"aaadcee478231d19980bb74737564618\") " pod="kube-system/kube-apiserver-scheduled-stop-906952"
	Oct 25 10:05:20 scheduled-stop-906952 kubelet[2267]: I1025 10:05:20.208260    2267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aaadcee478231d19980bb74737564618-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-906952\" (UID: \"aaadcee478231d19980bb74737564618\") " pod="kube-system/kube-apiserver-scheduled-stop-906952"
	Oct 25 10:05:20 scheduled-stop-906952 kubelet[2267]: I1025 10:05:20.208290    2267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3aa8a8219ace2c77652961b35d5a43cf-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-906952\" (UID: \"3aa8a8219ace2c77652961b35d5a43cf\") " pod="kube-system/kube-controller-manager-scheduled-stop-906952"
	Oct 25 10:05:20 scheduled-stop-906952 kubelet[2267]: I1025 10:05:20.208316    2267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3aa8a8219ace2c77652961b35d5a43cf-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-906952\" (UID: \"3aa8a8219ace2c77652961b35d5a43cf\") " pod="kube-system/kube-controller-manager-scheduled-stop-906952"
	Oct 25 10:05:20 scheduled-stop-906952 kubelet[2267]: I1025 10:05:20.208395    2267 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3aa8a8219ace2c77652961b35d5a43cf-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-906952\" (UID: \"3aa8a8219ace2c77652961b35d5a43cf\") " pod="kube-system/kube-controller-manager-scheduled-stop-906952"
	Oct 25 10:05:20 scheduled-stop-906952 kubelet[2267]: I1025 10:05:20.997663    2267 apiserver.go:52] "Watching apiserver"
	Oct 25 10:05:21 scheduled-stop-906952 kubelet[2267]: I1025 10:05:21.006353    2267 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 10:05:21 scheduled-stop-906952 kubelet[2267]: I1025 10:05:21.063775    2267 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-scheduled-stop-906952"
	Oct 25 10:05:21 scheduled-stop-906952 kubelet[2267]: I1025 10:05:21.063872    2267 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-scheduled-stop-906952"
	Oct 25 10:05:21 scheduled-stop-906952 kubelet[2267]: I1025 10:05:21.064089    2267 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-scheduled-stop-906952"
	Oct 25 10:05:21 scheduled-stop-906952 kubelet[2267]: I1025 10:05:21.064160    2267 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-scheduled-stop-906952"
	Oct 25 10:05:21 scheduled-stop-906952 kubelet[2267]: E1025 10:05:21.076413    2267 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-scheduled-stop-906952\" already exists" pod="kube-system/kube-scheduler-scheduled-stop-906952"
	Oct 25 10:05:21 scheduled-stop-906952 kubelet[2267]: E1025 10:05:21.076925    2267 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-scheduled-stop-906952\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-906952"
	Oct 25 10:05:21 scheduled-stop-906952 kubelet[2267]: E1025 10:05:21.076974    2267 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-scheduled-stop-906952\" already exists" pod="kube-system/kube-controller-manager-scheduled-stop-906952"
	Oct 25 10:05:21 scheduled-stop-906952 kubelet[2267]: E1025 10:05:21.078075    2267 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-scheduled-stop-906952\" already exists" pod="kube-system/etcd-scheduled-stop-906952"
	Oct 25 10:05:21 scheduled-stop-906952 kubelet[2267]: I1025 10:05:21.117047    2267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-906952" podStartSLOduration=1.117023177 podStartE2EDuration="1.117023177s" podCreationTimestamp="2025-10-25 10:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:05:21.102392965 +0000 UTC m=+1.164320262" watchObservedRunningTime="2025-10-25 10:05:21.117023177 +0000 UTC m=+1.178950453"
	Oct 25 10:05:21 scheduled-stop-906952 kubelet[2267]: I1025 10:05:21.129820    2267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-906952" podStartSLOduration=1.129793008 podStartE2EDuration="1.129793008s" podCreationTimestamp="2025-10-25 10:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:05:21.117288062 +0000 UTC m=+1.179215356" watchObservedRunningTime="2025-10-25 10:05:21.129793008 +0000 UTC m=+1.191720304"
	Oct 25 10:05:21 scheduled-stop-906952 kubelet[2267]: I1025 10:05:21.139786    2267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-906952" podStartSLOduration=1.139735409 podStartE2EDuration="1.139735409s" podCreationTimestamp="2025-10-25 10:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:05:21.130017475 +0000 UTC m=+1.191944760" watchObservedRunningTime="2025-10-25 10:05:21.139735409 +0000 UTC m=+1.201662705"
	Oct 25 10:05:21 scheduled-stop-906952 kubelet[2267]: I1025 10:05:21.139982    2267 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-906952" podStartSLOduration=1.139970474 podStartE2EDuration="1.139970474s" podCreationTimestamp="2025-10-25 10:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:05:21.139718508 +0000 UTC m=+1.201645804" watchObservedRunningTime="2025-10-25 10:05:21.139970474 +0000 UTC m=+1.201897771"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p scheduled-stop-906952 -n scheduled-stop-906952
helpers_test.go:269: (dbg) Run:  kubectl --context scheduled-stop-906952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: storage-provisioner
helpers_test.go:282: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context scheduled-stop-906952 describe pod storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context scheduled-stop-906952 describe pod storage-provisioner: exit status 1 (65.406484ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context scheduled-stop-906952 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-906952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-906952
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-906952: (2.228371557s)
--- FAIL: TestScheduledStopUnix (27.47s)


Test pass (316/345)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.63
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 2.94
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.43
21 TestBinaryMirror 0.85
22 TestOffline 75.19
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 138.26
29 TestAddons/serial/Volcano 39.03
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.52
35 TestAddons/parallel/Registry 15.21
36 TestAddons/parallel/RegistryCreds 0.64
38 TestAddons/parallel/InspektorGadget 5.25
39 TestAddons/parallel/MetricsServer 5.62
41 TestAddons/parallel/CSI 50.07
42 TestAddons/parallel/Headlamp 16.41
43 TestAddons/parallel/CloudSpanner 5.49
44 TestAddons/parallel/LocalPath 51.59
45 TestAddons/parallel/NvidiaDevicePlugin 6.47
46 TestAddons/parallel/Yakd 10.64
47 TestAddons/parallel/AmdGpuDevicePlugin 5.45
48 TestAddons/StoppedEnableDisable 11.37
49 TestCertOptions 29.19
50 TestCertExpiration 244.94
51 TestDockerFlags 41.95
52 TestForceSystemdFlag 36.02
53 TestForceSystemdEnv 36.95
58 TestErrorSpam/setup 22.33
59 TestErrorSpam/start 0.69
60 TestErrorSpam/status 1
61 TestErrorSpam/pause 1.3
62 TestErrorSpam/unpause 1.36
63 TestErrorSpam/stop 11.1
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 66.49
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 50.51
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.24
75 TestFunctional/serial/CacheCmd/cache/add_local 0.77
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.4
80 TestFunctional/serial/CacheCmd/cache/delete 0.14
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 49.25
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.07
86 TestFunctional/serial/LogsFileCmd 1.07
87 TestFunctional/serial/InvalidService 4.28
89 TestFunctional/parallel/ConfigCmd 0.52
91 TestFunctional/parallel/DryRun 0.41
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 1.08
97 TestFunctional/parallel/ServiceCmdConnect 7.53
98 TestFunctional/parallel/AddonsCmd 0.16
101 TestFunctional/parallel/SSHCmd 0.72
102 TestFunctional/parallel/CpCmd 1.95
104 TestFunctional/parallel/FileSync 0.3
105 TestFunctional/parallel/CertSync 1.83
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.31
113 TestFunctional/parallel/License 0.26
114 TestFunctional/parallel/ServiceCmd/DeployApp 8.21
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/ServiceCmd/List 0.52
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
123 TestFunctional/parallel/ServiceCmd/Format 0.4
124 TestFunctional/parallel/ServiceCmd/URL 0.4
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
126 TestFunctional/parallel/MountCmd/any-port 6.9
127 TestFunctional/parallel/ProfileCmd/profile_list 0.44
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
129 TestFunctional/parallel/Version/short 0.07
130 TestFunctional/parallel/Version/components 0.5
131 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
132 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
133 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
134 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
135 TestFunctional/parallel/ImageCommands/ImageBuild 2.81
136 TestFunctional/parallel/ImageCommands/Setup 0.41
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.93
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 0.95
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.67
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
144 TestFunctional/parallel/MountCmd/specific-port 2.05
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.04
146 TestFunctional/parallel/DockerEnv/bash 1.03
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 162.15
163 TestMultiControlPlane/serial/DeployApp 5.29
164 TestMultiControlPlane/serial/PingHostFromPods 1.28
165 TestMultiControlPlane/serial/AddWorkerNode 32.29
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.94
168 TestMultiControlPlane/serial/CopyFile 18.1
169 TestMultiControlPlane/serial/StopSecondaryNode 11.7
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
171 TestMultiControlPlane/serial/RestartSecondaryNode 38.34
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.01
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 165.23
174 TestMultiControlPlane/serial/DeleteSecondaryNode 9.44
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
176 TestMultiControlPlane/serial/StopCluster 32.59
177 TestMultiControlPlane/serial/RestartCluster 100.86
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.73
179 TestMultiControlPlane/serial/AddSecondaryNode 47.09
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.98
183 TestImageBuild/serial/Setup 23.31
184 TestImageBuild/serial/NormalBuild 1.16
185 TestImageBuild/serial/BuildWithBuildArg 0.7
186 TestImageBuild/serial/BuildWithDockerIgnore 0.51
187 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.51
191 TestJSONOutput/start/Command 61.5
192 TestJSONOutput/start/Audit 0
194 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/pause/Command 0.52
198 TestJSONOutput/pause/Audit 0
200 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/unpause/Command 0.49
204 TestJSONOutput/unpause/Audit 0
206 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/stop/Command 10.91
210 TestJSONOutput/stop/Audit 0
212 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
214 TestErrorJSONOutput 0.26
216 TestKicCustomNetwork/create_custom_network 24.13
217 TestKicCustomNetwork/use_default_bridge_network 22.44
218 TestKicExistingNetwork 27.69
219 TestKicCustomSubnet 24.2
220 TestKicStaticIP 25.31
221 TestMainNoArgs 0.06
222 TestMinikubeProfile 51.82
225 TestMountStart/serial/StartWithMountFirst 7.48
226 TestMountStart/serial/VerifyMountFirst 0.28
227 TestMountStart/serial/StartWithMountSecond 7.67
228 TestMountStart/serial/VerifyMountSecond 0.28
229 TestMountStart/serial/DeleteFirst 1.58
230 TestMountStart/serial/VerifyMountPostDelete 0.28
231 TestMountStart/serial/Stop 1.27
232 TestMountStart/serial/RestartStopped 8.37
233 TestMountStart/serial/VerifyMountPostStop 0.28
236 TestMultiNode/serial/FreshStart2Nodes 82.68
237 TestMultiNode/serial/DeployApp2Nodes 4.22
238 TestMultiNode/serial/PingHostFrom2Pods 0.85
239 TestMultiNode/serial/AddNode 31.72
240 TestMultiNode/serial/MultiNodeLabels 0.06
241 TestMultiNode/serial/ProfileList 0.69
242 TestMultiNode/serial/CopyFile 10.15
243 TestMultiNode/serial/StopNode 2.29
244 TestMultiNode/serial/StartAfterStop 9.27
245 TestMultiNode/serial/RestartKeepsNodes 72.23
246 TestMultiNode/serial/DeleteNode 5.38
247 TestMultiNode/serial/StopMultiNode 21.88
248 TestMultiNode/serial/RestartMultiNode 53.24
249 TestMultiNode/serial/ValidateNameConflict 30.04
254 TestPreload 105.81
257 TestSkaffold 76.39
259 TestInsufficientStorage 10.87
260 TestRunningBinaryUpgrade 50.95
262 TestKubernetesUpgrade 347.59
263 TestMissingContainerUpgrade 86.95
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
266 TestNoKubernetes/serial/StartWithK8s 35.66
267 TestNoKubernetes/serial/StartWithStopK8s 33.15
268 TestNoKubernetes/serial/Start 7.06
280 TestStoppedBinaryUpgrade/Setup 0.47
281 TestStoppedBinaryUpgrade/Upgrade 72.16
282 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
283 TestNoKubernetes/serial/ProfileList 1.39
284 TestNoKubernetes/serial/Stop 5.69
285 TestNoKubernetes/serial/StartNoArgs 9.5
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
287 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
296 TestPause/serial/Start 38.38
297 TestNetworkPlugins/group/auto/Start 68.77
298 TestPause/serial/SecondStartNoReconfiguration 53.06
299 TestNetworkPlugins/group/auto/KubeletFlags 0.3
300 TestNetworkPlugins/group/auto/NetCatPod 10.2
301 TestPause/serial/Pause 0.68
302 TestPause/serial/VerifyStatus 0.43
303 TestPause/serial/Unpause 0.61
304 TestPause/serial/PauseAgain 0.73
305 TestPause/serial/DeletePaused 2.5
306 TestPause/serial/VerifyDeletedResources 15.34
307 TestNetworkPlugins/group/kindnet/Start 49.22
308 TestNetworkPlugins/group/auto/DNS 0.14
309 TestNetworkPlugins/group/auto/Localhost 0.14
310 TestNetworkPlugins/group/auto/HairPin 0.13
311 TestNetworkPlugins/group/calico/Start 72.05
312 TestNetworkPlugins/group/custom-flannel/Start 39.48
313 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
314 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
315 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
318 TestNetworkPlugins/group/kindnet/DNS 0.14
319 TestNetworkPlugins/group/kindnet/Localhost 0.11
320 TestNetworkPlugins/group/kindnet/HairPin 0.11
321 TestNetworkPlugins/group/custom-flannel/DNS 0.14
322 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
323 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
324 TestNetworkPlugins/group/calico/ControllerPod 6.01
325 TestNetworkPlugins/group/false/Start 72.24
326 TestNetworkPlugins/group/calico/KubeletFlags 0.47
327 TestNetworkPlugins/group/calico/NetCatPod 10.89
328 TestNetworkPlugins/group/enable-default-cni/Start 38.56
329 TestNetworkPlugins/group/calico/DNS 0.16
330 TestNetworkPlugins/group/calico/Localhost 0.12
331 TestNetworkPlugins/group/calico/HairPin 0.13
332 TestNetworkPlugins/group/flannel/Start 44.79
333 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
334 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.19
335 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
336 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
337 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
338 TestNetworkPlugins/group/false/KubeletFlags 0.5
339 TestNetworkPlugins/group/false/NetCatPod 9.28
340 TestNetworkPlugins/group/bridge/Start 70.18
341 TestNetworkPlugins/group/false/DNS 0.15
342 TestNetworkPlugins/group/false/Localhost 0.12
343 TestNetworkPlugins/group/false/HairPin 0.12
344 TestNetworkPlugins/group/flannel/ControllerPod 6.01
345 TestNetworkPlugins/group/flannel/KubeletFlags 0.46
346 TestNetworkPlugins/group/flannel/NetCatPod 10.21
347 TestNetworkPlugins/group/kubenet/Start 70.92
348 TestNetworkPlugins/group/flannel/DNS 0.14
349 TestNetworkPlugins/group/flannel/Localhost 0.13
350 TestNetworkPlugins/group/flannel/HairPin 0.15
352 TestStartStop/group/old-k8s-version/serial/FirstStart 78.83
354 TestStartStop/group/no-preload/serial/FirstStart 79.38
355 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
356 TestNetworkPlugins/group/bridge/NetCatPod 9.2
357 TestNetworkPlugins/group/bridge/DNS 0.16
358 TestNetworkPlugins/group/bridge/Localhost 0.14
359 TestNetworkPlugins/group/bridge/HairPin 0.14
360 TestNetworkPlugins/group/kubenet/KubeletFlags 0.35
361 TestNetworkPlugins/group/kubenet/NetCatPod 10.21
362 TestNetworkPlugins/group/kubenet/DNS 0.16
363 TestNetworkPlugins/group/kubenet/Localhost 0.2
364 TestNetworkPlugins/group/kubenet/HairPin 0.14
366 TestStartStop/group/embed-certs/serial/FirstStart 71.36
367 TestStartStop/group/old-k8s-version/serial/DeployApp 8.31
368 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.11
369 TestStartStop/group/old-k8s-version/serial/Stop 10.92
371 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 64.56
372 TestStartStop/group/no-preload/serial/DeployApp 9.27
373 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
374 TestStartStop/group/old-k8s-version/serial/SecondStart 45.73
375 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
376 TestStartStop/group/no-preload/serial/Stop 11.21
377 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
378 TestStartStop/group/no-preload/serial/SecondStart 50.39
379 TestStartStop/group/embed-certs/serial/DeployApp 8.27
380 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
381 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.81
382 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
383 TestStartStop/group/embed-certs/serial/Stop 10.98
384 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
385 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
386 TestStartStop/group/old-k8s-version/serial/Pause 2.54
388 TestStartStop/group/newest-cni/serial/FirstStart 31.19
389 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
390 TestStartStop/group/embed-certs/serial/SecondStart 48.03
391 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.81
392 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.35
393 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
394 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
395 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
396 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.83
397 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.34
398 TestStartStop/group/no-preload/serial/Pause 2.93
399 TestStartStop/group/newest-cni/serial/DeployApp 0
400 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.83
401 TestStartStop/group/newest-cni/serial/Stop 11.02
402 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
403 TestStartStop/group/newest-cni/serial/SecondStart 12.43
404 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
405 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.08
406 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
408 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
409 TestStartStop/group/newest-cni/serial/Pause 2.52
410 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
411 TestStartStop/group/embed-certs/serial/Pause 2.44
412 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
413 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
414 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
415 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.45

TestDownloadOnly/v1.28.0/json-events (5.63s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-078670 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-078670 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.630225663s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.63s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1025 09:15:44.111765  503346 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1025 09:15:44.111856  503346 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-499776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-078670
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-078670: exit status 85 (76.958147ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-078670 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-078670 │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:15:38
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:15:38.539201  503358 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:15:38.539518  503358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:38.539530  503358 out.go:374] Setting ErrFile to fd 2...
	I1025 09:15:38.539535  503358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:38.539810  503358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
	W1025 09:15:38.540010  503358 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21767-499776/.minikube/config/config.json: open /home/jenkins/minikube-integration/21767-499776/.minikube/config/config.json: no such file or directory
	I1025 09:15:38.540575  503358 out.go:368] Setting JSON to true
	I1025 09:15:38.541619  503358 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3487,"bootTime":1761380252,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:15:38.541720  503358 start.go:141] virtualization: kvm guest
	I1025 09:15:38.543788  503358 out.go:99] [download-only-078670] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:15:38.543953  503358 notify.go:220] Checking for updates...
	W1025 09:15:38.543974  503358 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21767-499776/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 09:15:38.545221  503358 out.go:171] MINIKUBE_LOCATION=21767
	I1025 09:15:38.546614  503358 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:15:38.548027  503358 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig
	I1025 09:15:38.549282  503358 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube
	I1025 09:15:38.550603  503358 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 09:15:38.552774  503358 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 09:15:38.553161  503358 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:15:38.578354  503358 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:15:38.578459  503358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:38.634452  503358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-25 09:15:38.624061128 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:38.634623  503358 docker.go:318] overlay module found
	I1025 09:15:38.636211  503358 out.go:99] Using the docker driver based on user configuration
	I1025 09:15:38.636253  503358 start.go:305] selected driver: docker
	I1025 09:15:38.636261  503358 start.go:925] validating driver "docker" against <nil>
	I1025 09:15:38.636357  503358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:38.694175  503358 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-25 09:15:38.683321646 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:38.694343  503358 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:15:38.694950  503358 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1025 09:15:38.695104  503358 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 09:15:38.696692  503358 out.go:171] Using Docker driver with root privileges
	I1025 09:15:38.697862  503358 cni.go:84] Creating CNI manager for ""
	I1025 09:15:38.697947  503358 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 09:15:38.697965  503358 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 09:15:38.698044  503358 start.go:349] cluster config:
	{Name:download-only-078670 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-078670 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:15:38.699865  503358 out.go:99] Starting "download-only-078670" primary control-plane node in "download-only-078670" cluster
	I1025 09:15:38.699894  503358 cache.go:123] Beginning downloading kic base image for docker with docker
	I1025 09:15:38.701040  503358 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:15:38.701076  503358 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1025 09:15:38.701191  503358 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:15:38.719800  503358 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:15:38.720104  503358 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 09:15:38.720218  503358 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:15:38.722772  503358 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1025 09:15:38.722805  503358 cache.go:58] Caching tarball of preloaded images
	I1025 09:15:38.722930  503358 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1025 09:15:38.724481  503358 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1025 09:15:38.724498  503358 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1025 09:15:38.749835  503358 preload.go:290] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1025 09:15:38.749964  503358 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> /home/jenkins/minikube-integration/21767-499776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1025 09:15:41.094009  503358 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on docker
	I1025 09:15:41.094410  503358 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/download-only-078670/config.json ...
	I1025 09:15:41.094441  503358 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/download-only-078670/config.json: {Name:mke885e412eaf36c910a98ac5e2b7281ec2aa040 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:15:41.094641  503358 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1025 09:15:41.094821  503358 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21767-499776/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-078670 host does not exist
	  To start a cluster, run: "minikube start -p download-only-078670"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-078670
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (2.94s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-049605 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-049605 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (2.942810893s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (2.94s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1025 09:15:47.527303  503346 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime docker
I1025 09:15:47.527346  503346 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-499776/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-049605
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-049605: exit status 85 (75.399437ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-078670 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-078670 │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ delete  │ -p download-only-078670                                                                                                                                                       │ download-only-078670 │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │ 25 Oct 25 09:15 UTC │
	│ start   │ -o=json --download-only -p download-only-049605 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-049605 │ jenkins │ v1.37.0 │ 25 Oct 25 09:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:15:44
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:15:44.639842  503723 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:15:44.640097  503723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:44.640106  503723 out.go:374] Setting ErrFile to fd 2...
	I1025 09:15:44.640110  503723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:15:44.640317  503723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
	I1025 09:15:44.640803  503723 out.go:368] Setting JSON to true
	I1025 09:15:44.641791  503723 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":3493,"bootTime":1761380252,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:15:44.641846  503723 start.go:141] virtualization: kvm guest
	I1025 09:15:44.643685  503723 out.go:99] [download-only-049605] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:15:44.643851  503723 notify.go:220] Checking for updates...
	I1025 09:15:44.645001  503723 out.go:171] MINIKUBE_LOCATION=21767
	I1025 09:15:44.646557  503723 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:15:44.647768  503723 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig
	I1025 09:15:44.649028  503723 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube
	I1025 09:15:44.650282  503723 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 09:15:44.652326  503723 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 09:15:44.652606  503723 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:15:44.676386  503723 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:15:44.676540  503723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:44.733043  503723 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-25 09:15:44.723253283 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:44.733163  503723 docker.go:318] overlay module found
	I1025 09:15:44.734971  503723 out.go:99] Using the docker driver based on user configuration
	I1025 09:15:44.735011  503723 start.go:305] selected driver: docker
	I1025 09:15:44.735025  503723 start.go:925] validating driver "docker" against <nil>
	I1025 09:15:44.735115  503723 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:15:44.795442  503723 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-25 09:15:44.785495737 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:15:44.795678  503723 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:15:44.796388  503723 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1025 09:15:44.796639  503723 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 09:15:44.798574  503723 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-049605 host does not exist
	  To start a cluster, run: "minikube start -p download-only-049605"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-049605
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-718888 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-718888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-718888
--- PASS: TestDownloadOnlyKic (0.43s)

TestBinaryMirror (0.85s)

=== RUN   TestBinaryMirror
I1025 09:15:48.718883  503346 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-145493 --alsologtostderr --binary-mirror http://127.0.0.1:36883 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-145493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-145493
--- PASS: TestBinaryMirror (0.85s)

TestOffline (75.19s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-637697 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-637697 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m12.828344943s)
helpers_test.go:175: Cleaning up "offline-docker-637697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-637697
E1025 10:08:07.897443  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-637697: (2.360307626s)
--- PASS: TestOffline (75.19s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-456159
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-456159: exit status 85 (68.583009ms)

-- stdout --
	* Profile "addons-456159" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-456159"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-456159
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-456159: exit status 85 (72.57418ms)

-- stdout --
	* Profile "addons-456159" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-456159"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (138.26s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-456159 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-456159 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m18.255112117s)
--- PASS: TestAddons/Setup (138.26s)

TestAddons/serial/Volcano (39.03s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 13.671469ms
addons_test.go:868: volcano-scheduler stabilized in 13.716232ms
addons_test.go:884: volcano-controller stabilized in 13.798897ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-8tr7k" [ec595a0e-a1eb-4f06-8cb1-280d970865b8] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004193221s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-krnbj" [e3d0f1e7-594a-4e87-85fe-cc83787fdbfb] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004381284s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-cgwdd" [9f6161ef-57f5-47b1-bd06-3da4e3883ec7] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003854926s
addons_test.go:903: (dbg) Run:  kubectl --context addons-456159 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-456159 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-456159 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [8aa0c861-97e8-467c-9b0e-4b2df23676ac] Pending
helpers_test.go:352: "test-job-nginx-0" [8aa0c861-97e8-467c-9b0e-4b2df23676ac] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [8aa0c861-97e8-467c-9b0e-4b2df23676ac] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003850956s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-456159 addons disable volcano --alsologtostderr -v=1: (11.677192095s)
--- PASS: TestAddons/serial/Volcano (39.03s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-456159 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-456159 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.52s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-456159 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-456159 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [85ae16d1-2604-43df-9036-c07444acf50b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [85ae16d1-2604-43df-9036-c07444acf50b] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004096578s
addons_test.go:694: (dbg) Run:  kubectl --context addons-456159 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-456159 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-456159 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.52s)

TestAddons/parallel/Registry (15.21s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.017003ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-klmzr" [a487d49f-b5d9-45e5-aaaf-07dd3d13040f] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00280047s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-cdxgf" [1d20d86d-607e-4d57-ae6c-05bfab83d2da] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004077719s
addons_test.go:392: (dbg) Run:  kubectl --context addons-456159 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-456159 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-456159 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.437339335s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.21s)

TestAddons/parallel/RegistryCreds (0.64s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.533954ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-456159
addons_test.go:332: (dbg) Run:  kubectl --context addons-456159 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.64s)

TestAddons/parallel/InspektorGadget (5.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-lr8sb" [463c28ed-451a-4830-9a87-d372fcf4314b] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003927068s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.25s)

TestAddons/parallel/MetricsServer (5.62s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.39201ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-vt5bt" [05632dd8-d543-4134-9c4c-4ffcab0f110a] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003908341s
addons_test.go:463: (dbg) Run:  kubectl --context addons-456159 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.62s)

TestAddons/parallel/CSI (50.07s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1025 09:19:11.925767  503346 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1025 09:19:11.929311  503346 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1025 09:19:11.929351  503346 kapi.go:107] duration metric: took 3.606605ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.620833ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-456159 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/10/25 09:19:20 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-456159 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [bf344a8c-fd31-4ec4-a575-3b8c366faad7] Pending
helpers_test.go:352: "task-pv-pod" [bf344a8c-fd31-4ec4-a575-3b8c366faad7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [bf344a8c-fd31-4ec4-a575-3b8c366faad7] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003869459s
addons_test.go:572: (dbg) Run:  kubectl --context addons-456159 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-456159 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-456159 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-456159 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-456159 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-456159 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-456159 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [995891fb-ce68-47ac-b654-5fc0b58e7711] Pending
helpers_test.go:352: "task-pv-pod-restore" [995891fb-ce68-47ac-b654-5fc0b58e7711] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003258981s
addons_test.go:614: (dbg) Run:  kubectl --context addons-456159 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-456159 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-456159 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-456159 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.547648759s)
--- PASS: TestAddons/parallel/CSI (50.07s)
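
The pass above exercises the full CSI lifecycle: provision a PVC, mount it in a pod, snapshot the volume, then restore the snapshot into a fresh claim and pod. For anyone replaying it by hand against the same addon, a minimal sketch follows; the testdata file and resource names come from the log itself, while the `kubectl wait` conditions are an illustrative addition (the test polls phase/readiness in Go instead). All commands assume --context addons-456159, elided for brevity:

kubectl create -f testdata/csi-hostpath-driver/pvc.yaml
kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m
kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml
kubectl wait --for=condition=Ready pod/task-pv-pod --timeout=6m
# Snapshot the live volume, then restore it into a new claim and pod.
kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml
kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml
kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml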

TestAddons/parallel/Headlamp (16.41s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-456159 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-ft7ph" [f95628d0-70ab-459c-ab20-86e493efe45c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-ft7ph" [f95628d0-70ab-459c-ab20-86e493efe45c] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004335279s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-456159 addons disable headlamp --alsologtostderr -v=1: (5.645105493s)
--- PASS: TestAddons/parallel/Headlamp (16.41s)

TestAddons/parallel/CloudSpanner (5.49s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-dk9vc" [a271da33-50ab-463a-a472-d629d442ecaa] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003544444s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.49s)

TestAddons/parallel/LocalPath (51.59s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-456159 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-456159 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-456159 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [b3083b76-a582-4693-9491-a0f1d0c44e45] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [b3083b76-a582-4693-9491-a0f1d0c44e45] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [b3083b76-a582-4693-9491-a0f1d0c44e45] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003612961s
addons_test.go:967: (dbg) Run:  kubectl --context addons-456159 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 ssh "cat /opt/local-path-provisioner/pvc-b94f5b89-0c64-4b51-b2a3-1c6e15972da1_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-456159 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-456159 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-456159 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.696421548s)
--- PASS: TestAddons/parallel/LocalPath (51.59s)
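
In short, the local-path check is: bind a PVC against the rancher provisioner, let a pod write through it to completion, then read the file back from the node's /opt/local-path-provisioner tree. A hand-run equivalent, with the host path derived from the bound PV name (the jsonpath lookup is an illustrative addition; the test extracts the path from the PVC JSON in Go):

kubectl --context addons-456159 apply -f testdata/storage-provisioner-rancher/pvc.yaml
kubectl --context addons-456159 apply -f testdata/storage-provisioner-rancher/pod.yaml
kubectl --context addons-456159 wait --for=jsonpath='{.status.phase}'=Succeeded pod/test-local-path --timeout=3m
PV=$(kubectl --context addons-456159 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
out/minikube-linux-amd64 -p addons-456159 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"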

TestAddons/parallel/NvidiaDevicePlugin (6.47s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-62v67" [ac505d8c-525c-4cdb-b892-7dc4dbd2c0c9] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003612182s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.47s)

TestAddons/parallel/Yakd (10.64s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-m48t2" [8039fe86-7bd1-47d5-b940-f9637b02b766] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003432696s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-456159 addons disable yakd --alsologtostderr -v=1: (5.639150221s)
--- PASS: TestAddons/parallel/Yakd (10.64s)

TestAddons/parallel/AmdGpuDevicePlugin (5.45s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-c9fx7" [531adaeb-818e-44af-a844-595c0764db21] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003082604s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-456159 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.45s)

TestAddons/StoppedEnableDisable (11.37s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-456159
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-456159: (11.060037079s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-456159
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-456159
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-456159
--- PASS: TestAddons/StoppedEnableDisable (11.37s)

TestCertOptions (29.19s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-450622 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-450622 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (26.160217525s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-450622 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-450622 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-450622 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-450622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-450622
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-450622: (2.288821036s)
--- PASS: TestCertOptions (29.19s)
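
What this test pins down: every extra --apiserver-ips/--apiserver-names value must land in the apiserver serving certificate's SANs, and the configured port 8555 in the kubeconfig server URL. The same checks by hand (the grep patterns are illustrative, not the test's own assertions):

out/minikube-linux-amd64 -p cert-options-450622 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
# expect 192.168.15.15 and www.google.com among the SANs
kubectl --context cert-options-450622 config view | grep 'server:'
# expect the URL to end in :8555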

TestCertExpiration (244.94s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-652303 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-652303 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (27.26594048s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-652303 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-652303 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (35.398669684s)
helpers_test.go:175: Cleaning up "cert-expiration-652303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-652303
E1025 10:11:33.396887  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-652303: (2.278594105s)
--- PASS: TestCertExpiration (244.94s)

TestDockerFlags (41.95s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-813148 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-813148 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (37.956777383s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-813148 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-813148 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-813148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-813148
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-813148: (3.166872861s)
--- PASS: TestDockerFlags (41.95s)
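
The two ssh probes are where the assertion lives: --docker-env values must surface in the docker unit's Environment property, and --docker-opt values in its ExecStart line. Checked by hand (grep patterns illustrative):

out/minikube-linux-amd64 -p docker-flags-813148 ssh "sudo systemctl show docker --property=Environment --no-pager" | grep -E 'FOO=BAR|BAZ=BAT'
out/minikube-linux-amd64 -p docker-flags-813148 ssh "sudo systemctl show docker --property=ExecStart --no-pager" | grep -E 'debug|icc=true'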

TestForceSystemdFlag (36.02s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-686344 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-686344 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (33.067747511s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-686344 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-686344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-686344
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-686344: (2.497040376s)
--- PASS: TestForceSystemdFlag (36.02s)

TestForceSystemdEnv (36.95s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-334996 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-334996 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (34.197508946s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-334996 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-334996" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-334996
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-334996: (2.382052582s)
--- PASS: TestForceSystemdEnv (36.95s)
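
Both force-systemd variants funnel into the same assertion at docker_test.go:110: the runtime must report systemd as its cgroup driver. Judging by the test names, the flag variant passes --force-systemd on the command line while the env variant presumably relies on the MINIKUBE_FORCE_SYSTEMD environment variable; only the flag case is explicit in this log. The check itself, by hand:

out/minikube-linux-amd64 -p force-systemd-env-334996 ssh "docker info --format {{.CgroupDriver}}"
# expected output: systemd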

TestErrorSpam/setup (22.33s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-225622 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-225622 --driver=docker  --container-runtime=docker
E1025 09:28:07.905847  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:28:07.912286  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:28:07.923733  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:28:07.945221  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:28:07.986691  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:28:08.068260  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:28:08.229867  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:28:08.551368  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:28:09.193427  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:28:10.474901  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:28:13.037845  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-225622 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-225622 --driver=docker  --container-runtime=docker: (22.32864707s)
--- PASS: TestErrorSpam/setup (22.33s)

TestErrorSpam/start (0.69s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 start --dry-run
--- PASS: TestErrorSpam/start (0.69s)

TestErrorSpam/status (1s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 status
--- PASS: TestErrorSpam/status (1.00s)

TestErrorSpam/pause (1.3s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 pause
--- PASS: TestErrorSpam/pause (1.30s)

TestErrorSpam/unpause (1.36s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 unpause
E1025 09:28:18.159867  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 unpause
--- PASS: TestErrorSpam/unpause (1.36s)

TestErrorSpam/stop (11.1s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 stop
E1025 09:28:28.401412  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 stop: (10.880216487s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-225622 --log_dir /tmp/nospam-225622 stop
--- PASS: TestErrorSpam/stop (11.10s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21767-499776/.minikube/files/etc/test/nested/copy/503346/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (66.49s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-013051 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E1025 09:28:48.882784  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:29:29.844724  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-013051 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m6.487503249s)
--- PASS: TestFunctional/serial/StartWithProxy (66.49s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (50.51s)
=== RUN   TestFunctional/serial/SoftStart
I1025 09:29:37.893035  503346 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-013051 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-013051 --alsologtostderr -v=8: (50.505099424s)
functional_test.go:678: soft start took 50.505983454s for "functional-013051" cluster.
I1025 09:30:28.398665  503346 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (50.51s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-013051 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.24s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.24s)

TestFunctional/serial/CacheCmd/cache/add_local (0.77s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-013051 /tmp/TestFunctionalserialCacheCmdcacheadd_local875351414/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 cache add minikube-local-cache-test:functional-013051
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 cache delete minikube-local-cache-test:functional-013051
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-013051
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.77s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.4s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-013051 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.972211ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.40s)
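
This sequence is the cache-reload round trip spelled out: remove the image inside the node, confirm crictl no longer finds it (the expected exit status 1 and FATA message above), then have minikube re-push everything in its cache. The same commands, straight from the log:

out/minikube-linux-amd64 -p functional-013051 ssh sudo docker rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-013051 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
out/minikube-linux-amd64 -p functional-013051 cache reload
out/minikube-linux-amd64 -p functional-013051 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again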

TestFunctional/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 kubectl -- --context functional-013051 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-013051 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (49.25s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-013051 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 09:30:51.768839  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-013051 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.244599722s)
functional_test.go:776: restart took 49.244971717s for "functional-013051" cluster.
I1025 09:31:22.979068  503346 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (49.25s)
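
ExtraConfig restarts the cluster with an extra kube-apiserver flag and waits for all components. One way to confirm the option actually reached the apiserver's static pod (this jsonpath/grep probe is an illustrative addition, not something the test runs):

kubectl --context functional-013051 -n kube-system get pod -l component=kube-apiserver -o jsonpath='{.items[0].spec.containers[0].command}' | grep -o 'enable-admission-plugins=[^"]*'
# expect: enable-admission-plugins=NamespaceAutoProvision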

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-013051 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
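
ComponentHealth walks the control-plane pods and requires each to be Running with a Ready condition of True, as logged above. Roughly the same check from the shell (a sketch; the test parses the pod JSON in Go rather than via jsonpath):

kubectl --context functional-013051 get po -l tier=control-plane -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{" phase="}{.status.phase}{" ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'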

TestFunctional/serial/LogsCmd (1.07s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-013051 logs: (1.068763451s)
--- PASS: TestFunctional/serial/LogsCmd (1.07s)

TestFunctional/serial/LogsFileCmd (1.07s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 logs --file /tmp/TestFunctionalserialLogsFileCmd2041439298/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-013051 logs --file /tmp/TestFunctionalserialLogsFileCmd2041439298/001/logs.txt: (1.066394577s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.07s)

TestFunctional/serial/InvalidService (4.28s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-013051 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-013051
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-013051: exit status 115 (364.274736ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30994 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-013051 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.28s)
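
InvalidService covers the failure path: a Service whose selector matches no running pod must make `minikube service` exit with SVC_UNREACHABLE (status 115, as captured above) instead of printing a dead URL. Reproducing it takes nothing beyond the testdata manifest:

kubectl --context functional-013051 apply -f testdata/invalidsvc.yaml
out/minikube-linux-amd64 service invalid-svc -p functional-013051; echo "exit=$?"   # expect exit=115
kubectl --context functional-013051 delete -f testdata/invalidsvc.yaml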

TestFunctional/parallel/ConfigCmd (0.52s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-013051 config get cpus: exit status 14 (95.275815ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-013051 config get cpus: exit status 14 (106.643373ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)

TestFunctional/parallel/DryRun (0.41s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-013051 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-013051 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (183.177135ms)
-- stdout --
	* [functional-013051] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1025 09:31:41.977971  561124 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:31:41.978265  561124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:31:41.978276  561124 out.go:374] Setting ErrFile to fd 2...
	I1025 09:31:41.978280  561124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:31:41.978545  561124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
	I1025 09:31:41.979072  561124 out.go:368] Setting JSON to false
	I1025 09:31:41.980214  561124 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4450,"bootTime":1761380252,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:31:41.980324  561124 start.go:141] virtualization: kvm guest
	I1025 09:31:41.982321  561124 out.go:179] * [functional-013051] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1025 09:31:41.983810  561124 notify.go:220] Checking for updates...
	I1025 09:31:41.983833  561124 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:31:41.985532  561124 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:31:41.987329  561124 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig
	I1025 09:31:41.988450  561124 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube
	I1025 09:31:41.989613  561124 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:31:41.990762  561124 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:31:41.992350  561124 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:31:41.992887  561124 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:31:42.023508  561124 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:31:42.023720  561124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:31:42.084847  561124 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 09:31:42.07335459 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:31:42.084969  561124 docker.go:318] overlay module found
	I1025 09:31:42.086787  561124 out.go:179] * Using the docker driver based on existing profile
	I1025 09:31:42.088061  561124 start.go:305] selected driver: docker
	I1025 09:31:42.088082  561124 start.go:925] validating driver "docker" against &{Name:functional-013051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-013051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:31:42.088209  561124 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:31:42.089889  561124 out.go:203] 
	W1025 09:31:42.091038  561124 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 09:31:42.092197  561124 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-013051 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.41s)

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-013051 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-013051 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (191.441645ms)

-- stdout --
	* [functional-013051] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1025 09:31:47.446424  562892 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:31:47.446775  562892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:31:47.446788  562892 out.go:374] Setting ErrFile to fd 2...
	I1025 09:31:47.446793  562892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:31:47.447140  562892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
	I1025 09:31:47.447920  562892 out.go:368] Setting JSON to false
	I1025 09:31:47.449085  562892 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":4455,"bootTime":1761380252,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 09:31:47.449161  562892 start.go:141] virtualization: kvm guest
	I1025 09:31:47.450767  562892 out.go:179] * [functional-013051] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1025 09:31:47.452422  562892 out.go:179]   - MINIKUBE_LOCATION=21767
	I1025 09:31:47.452455  562892 notify.go:220] Checking for updates...
	I1025 09:31:47.455160  562892 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:31:47.456673  562892 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig
	I1025 09:31:47.458138  562892 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube
	I1025 09:31:47.459719  562892 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 09:31:47.461132  562892 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:31:47.463069  562892 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:31:47.463887  562892 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:31:47.489930  562892 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1025 09:31:47.490043  562892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:31:47.549908  562892 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-25 09:31:47.539959558 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:31:47.550038  562892 docker.go:318] overlay module found
	I1025 09:31:47.551897  562892 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1025 09:31:47.553183  562892 start.go:305] selected driver: docker
	I1025 09:31:47.553200  562892 start.go:925] validating driver "docker" against &{Name:functional-013051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-013051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:31:47.553297  562892 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:31:47.555201  562892 out.go:203] 
	W1025 09:31:47.556645  562892 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 09:31:47.558026  562892 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
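The French output above is exactly what this test asserts: minikube renders its messages in the language of the process locale. As a minimal sketch (assuming the language is picked up from the standard locale environment variables, and reusing the binary path and profile name from this log), the same behavior can be reproduced from Go; with the locale unset, the same command should print the English RSRC_INSUFFICIENT_REQ_MEMORY message seen in TestFunctional/parallel/DryRun:

    // localized_dryrun.go - a sketch, not the test's actual implementation.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-amd64", "start",
            "-p", "functional-013051", "--dry-run", "--memory", "250MB")
        // Copy the parent environment, then force a French locale for this
        // one invocation (assumption: minikube reads LC_ALL/LANG).
        cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
        out, err := cmd.CombinedOutput()
        // Expect exit status 23 and the French RSRC_INSUFFICIENT_REQ_MEMORY text.
        fmt.Printf("err=%v\n%s", err, out)
    }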

TestFunctional/parallel/StatusCmd (1.08s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)
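The -f flag exercised above takes a Go text/template that is evaluated against minikube's status struct (the template string, including its "kublet" spelling, is quoted verbatim from the test). A small sketch of the same rendering, using a stand-in struct rather than minikube's real type:

    // status_format.go - sketch; Status is a stand-in, not minikube's type.
    package main

    import (
        "os"
        "text/template"
    )

    type Status struct {
        Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        tmpl := template.Must(template.New("status").Parse(
            "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
        // Example values mirroring a healthy cluster.
        if err := tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"}); err != nil {
            panic(err)
        }
    }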

TestFunctional/parallel/ServiceCmdConnect (7.53s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-013051 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-013051 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-tpmn9" [dd3c9993-86af-4ee8-a887-a176366aabea] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-tpmn9" [dd3c9993-86af-4ee8-a887-a176366aabea] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003111373s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31580
functional_test.go:1680: http://192.168.49.2:31580: success! body:
Request served by hello-node-connect-7d85dfc575-tpmn9

HTTP/1.1 GET /

Host: 192.168.49.2:31580
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.53s)
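The check above boils down to: resolve the NodePort URL via `minikube service --url`, then GET it and read the echo-server's reply. A minimal sketch of that probe (the URL is the one printed in this particular run and differs per run):

    // probe_service.go - sketch of the HTTP check performed by the test.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Get("http://192.168.49.2:31580")
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // The echo-server replies with the serving pod name and request headers.
        fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
    }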

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (1.95s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh -n functional-013051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 cp functional-013051:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1484240294/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh -n functional-013051 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh -n functional-013051 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.95s)

TestFunctional/parallel/FileSync (0.3s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/503346/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "sudo cat /etc/test/nested/copy/503346/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)

TestFunctional/parallel/CertSync (1.83s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/503346.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "sudo cat /etc/ssl/certs/503346.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/503346.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "sudo cat /usr/share/ca-certificates/503346.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5033462.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "sudo cat /etc/ssl/certs/5033462.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5033462.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "sudo cat /usr/share/ca-certificates/5033462.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.83s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-013051 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.31s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-013051 ssh "sudo systemctl is-active crio": exit status 1 (308.110289ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.31s)
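`systemctl is-active` prints the unit state and reports it through the exit code as well (0 for active, non-zero otherwise), which is why a non-zero exit plus "inactive" on stdout counts as a pass here: the crio runtime must be disabled when docker is the active runtime. A local sketch of the same check-by-exit-code pattern in Go:

    // exitcode_check.go - sketch; runs locally rather than over minikube ssh.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("systemctl", "is-active", "crio").Output()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // is-active exits non-zero for anything but "active",
            // e.g. status 3 for an inactive unit.
            fmt.Printf("state=%q exit=%d\n", out, ee.ExitCode())
            return
        }
        fmt.Printf("active: %s", out)
    }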

TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-013051 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-013051 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-rf646" [abcd60b6-8f7d-4ef5-8307-b6e7ac643578] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-rf646" [abcd60b6-8f7d-4ef5-8307-b6e7ac643578] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003538875s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.21s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-013051 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-013051 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-013051 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 557934: os: process already finished
helpers_test.go:525: unable to kill pid 557559: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-013051 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-013051 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 service list -o json
functional_test.go:1504: Took "524.734174ms" to run "out/minikube-linux-amd64 -p functional-013051 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32190
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32190
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/MountCmd/any-port (6.9s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-013051 /tmp/TestFunctionalparallelMountCmdany-port4208635289/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761384700180859698" to /tmp/TestFunctionalparallelMountCmdany-port4208635289/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761384700180859698" to /tmp/TestFunctionalparallelMountCmdany-port4208635289/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761384700180859698" to /tmp/TestFunctionalparallelMountCmdany-port4208635289/001/test-1761384700180859698
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-013051 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (327.140011ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 09:31:40.508383  503346 retry.go:31] will retry after 436.931988ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 25 09:31 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 25 09:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 25 09:31 test-1761384700180859698
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh cat /mount-9p/test-1761384700180859698
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-013051 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [bb7a3966-b4da-4039-a7a4-2f414ed842bb] Pending
helpers_test.go:352: "busybox-mount" [bb7a3966-b4da-4039-a7a4-2f414ed842bb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [bb7a3966-b4da-4039-a7a4-2f414ed842bb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [bb7a3966-b4da-4039-a7a4-2f414ed842bb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004585435s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-013051 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-013051 /tmp/TestFunctionalparallelMountCmdany-port4208635289/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.90s)
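Note the retry.go line above: the first findmnt probe raced the mount setup and failed, so the test retried after a short delay until the 9p mount became visible. A sketch of that poll-with-backoff pattern (the durations and attempt count are illustrative, not minikube's):

    // retry_mount_check.go - sketch of polling for a mount with backoff.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        backoff := 400 * time.Millisecond
        for attempt := 1; attempt <= 5; attempt++ {
            // findmnt exits 0 once the target path is a mount point.
            if err := exec.Command("findmnt", "-T", "/mount-9p").Run(); err == nil {
                fmt.Println("mount is visible")
                return
            }
            fmt.Printf("attempt %d failed, retrying in %v\n", attempt, backoff)
            time.Sleep(backoff)
            backoff *= 2
        }
        fmt.Println("mount never appeared")
    }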

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "371.901939ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "67.090521ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "367.624259ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "69.717454ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.5s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-013051 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-013051
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-013051
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-013051 image ls --format short --alsologtostderr:
I1025 09:36:52.077632  569431 out.go:360] Setting OutFile to fd 1 ...
I1025 09:36:52.077953  569431 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:36:52.077963  569431 out.go:374] Setting ErrFile to fd 2...
I1025 09:36:52.077967  569431 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:36:52.078170  569431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
I1025 09:36:52.078901  569431 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:36:52.079009  569431 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:36:52.079423  569431 cli_runner.go:164] Run: docker container inspect functional-013051 --format={{.State.Status}}
I1025 09:36:52.098676  569431 ssh_runner.go:195] Run: systemctl --version
I1025 09:36:52.098724  569431 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-013051
I1025 09:36:52.116996  569431 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/functional-013051/id_rsa Username:docker}
I1025 09:36:52.219299  569431 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-013051 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.1           │ c3994bc696102 │ 88MB   │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ docker.io/kicbase/echo-server               │ functional-013051 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ localhost/my-image                          │ functional-013051 │ 0ead72bd6a827 │ 1.24MB │
│ docker.io/library/minikube-local-cache-test │ functional-013051 │ 2a0806261b0d9 │ 30B    │
│ registry.k8s.io/kube-scheduler              │ v1.34.1           │ 7dd6aaa1717ab │ 52.8MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1           │ fc25172553d79 │ 71.9MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1           │ c80c8dbafe7dd │ 74.9MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-013051 image ls --format table --alsologtostderr:
I1025 09:36:55.580319  569952 out.go:360] Setting OutFile to fd 1 ...
I1025 09:36:55.580432  569952 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:36:55.580440  569952 out.go:374] Setting ErrFile to fd 2...
I1025 09:36:55.580444  569952 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:36:55.580675  569952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
I1025 09:36:55.581279  569952 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:36:55.581368  569952 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:36:55.581793  569952 cli_runner.go:164] Run: docker container inspect functional-013051 --format={{.State.Status}}
I1025 09:36:55.599895  569952 ssh_runner.go:195] Run: systemctl --version
I1025 09:36:55.599957  569952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-013051
I1025 09:36:55.619522  569952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/functional-013051/id_rsa Username:docker}
I1025 09:36:55.720906  569952 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-013051 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"2a0806261b0d90e60fb2ea0b5ec14114b9ba93604b46ef871de2c740d35cb655","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-013051"],"size":"30"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-013051","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"0ead72bd6a827877292a6a9eade641981dd6d069ef7159d36e78bc2367ec49b7","repoDigests":[],"repoTags":["localhost/my-image:functional-013051"],"size":"1240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id"
:"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"52800000"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"88000000"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"71900000"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146e
e4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"74900000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-013051 image ls --format json --alsologtostderr:
I1025 09:36:55.350679  569900 out.go:360] Setting OutFile to fd 1 ...
I1025 09:36:55.350935  569900 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:36:55.350943  569900 out.go:374] Setting ErrFile to fd 2...
I1025 09:36:55.350946  569900 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:36:55.351339  569900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
I1025 09:36:55.351950  569900 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:36:55.352045  569900 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:36:55.352414  569900 cli_runner.go:164] Run: docker container inspect functional-013051 --format={{.State.Status}}
I1025 09:36:55.370946  569900 ssh_runner.go:195] Run: systemctl --version
I1025 09:36:55.370995  569900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-013051
I1025 09:36:55.389289  569900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/functional-013051/id_rsa Username:docker}
I1025 09:36:55.489637  569900 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
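The JSON format shown above is an array of objects with id, repoDigests, repoTags, and size fields, so it is straightforward to consume programmatically. A sketch (struct fields inferred from this run's output; the sample data is trimmed for brevity):

    // parse_image_list.go - sketch for consuming `image ls --format json`.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type image struct {
        ID       string   `json:"id"`
        RepoTags []string `json:"repoTags"`
        Size     string   `json:"size"`
    }

    func main() {
        // Trimmed sample matching the shape of the output above.
        data := []byte(`[{"id":"da86e6ba...","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]`)
        var images []image
        if err := json.Unmarshal(data, &images); err != nil {
            panic(err)
        }
        for _, img := range images {
            fmt.Println(img.RepoTags, img.Size)
        }
    }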

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-013051 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-013051
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 2a0806261b0d90e60fb2ea0b5ec14114b9ba93604b46ef871de2c740d35cb655
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-013051
size: "30"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "52800000"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "88000000"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "74900000"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "71900000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-013051 image ls --format yaml --alsologtostderr:
I1025 09:36:52.312693  569481 out.go:360] Setting OutFile to fd 1 ...
I1025 09:36:52.312952  569481 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:36:52.312962  569481 out.go:374] Setting ErrFile to fd 2...
I1025 09:36:52.312965  569481 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:36:52.313149  569481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
I1025 09:36:52.313793  569481 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:36:52.313888  569481 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:36:52.314311  569481 cli_runner.go:164] Run: docker container inspect functional-013051 --format={{.State.Status}}
I1025 09:36:52.332951  569481 ssh_runner.go:195] Run: systemctl --version
I1025 09:36:52.333002  569481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-013051
I1025 09:36:52.352012  569481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/functional-013051/id_rsa Username:docker}
I1025 09:36:52.453001  569481 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-013051 ssh pgrep buildkitd: exit status 1 (290.444409ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image build -t localhost/my-image:functional-013051 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-013051 image build -t localhost/my-image:functional-013051 testdata/build --alsologtostderr: (2.284086371s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-013051 image build -t localhost/my-image:functional-013051 testdata/build --alsologtostderr:
I1025 09:36:52.833939  569652 out.go:360] Setting OutFile to fd 1 ...
I1025 09:36:52.834195  569652 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:36:52.834204  569652 out.go:374] Setting ErrFile to fd 2...
I1025 09:36:52.834207  569652 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:36:52.834426  569652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
I1025 09:36:52.835075  569652 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:36:52.835868  569652 config.go:182] Loaded profile config "functional-013051": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
I1025 09:36:52.836249  569652 cli_runner.go:164] Run: docker container inspect functional-013051 --format={{.State.Status}}
I1025 09:36:52.856518  569652 ssh_runner.go:195] Run: systemctl --version
I1025 09:36:52.856569  569652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-013051
I1025 09:36:52.875175  569652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/functional-013051/id_rsa Username:docker}
I1025 09:36:52.975525  569652 build_images.go:161] Building image from path: /tmp/build.3993947821.tar
I1025 09:36:52.975618  569652 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 09:36:52.984367  569652 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3993947821.tar
I1025 09:36:52.988494  569652 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3993947821.tar: stat -c "%s %y" /var/lib/minikube/build/build.3993947821.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3993947821.tar': No such file or directory
I1025 09:36:52.988530  569652 ssh_runner.go:362] scp /tmp/build.3993947821.tar --> /var/lib/minikube/build/build.3993947821.tar (3072 bytes)
I1025 09:36:53.007607  569652 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3993947821
I1025 09:36:53.016208  569652 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3993947821 -xf /var/lib/minikube/build/build.3993947821.tar
I1025 09:36:53.025255  569652 docker.go:361] Building image: /var/lib/minikube/build/build.3993947821
I1025 09:36:53.025346  569652 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-013051 /var/lib/minikube/build/build.3993947821
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:0ead72bd6a827877292a6a9eade641981dd6d069ef7159d36e78bc2367ec49b7 done
#8 naming to localhost/my-image:functional-013051 done
#8 DONE 0.0s
I1025 09:36:55.034285  569652 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-013051 /var/lib/minikube/build/build.3993947821: (2.008903502s)
I1025 09:36:55.034377  569652 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3993947821
I1025 09:36:55.043208  569652 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3993947821.tar
I1025 09:36:55.051666  569652 build_images.go:217] Built localhost/my-image:functional-013051 from /tmp/build.3993947821.tar
I1025 09:36:55.051704  569652 build_images.go:133] succeeded building to: functional-013051
I1025 09:36:55.051711  569652 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.81s)
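The build path this test exercises can be replayed manually; a sketch using the exact commands from the log (testdata/build holds the three-step Dockerfile visible in the BuildKit output above):

    # build an image directly inside the cluster's Docker daemon
    out/minikube-linux-amd64 -p functional-013051 image build -t localhost/my-image:functional-013051 testdata/build
    # confirm the image landed
    out/minikube-linux-amd64 -p functional-013051 image ls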

TestFunctional/parallel/ImageCommands/Setup (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-013051
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.41s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image load --daemon kicbase/echo-server:functional-013051 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.93s)
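image load --daemon copies a tag from the host's Docker daemon into the cluster; a sketch of the flow the Setup and LoadDaemon tests share:

    # tag an image on the host, then push it into the cluster
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-013051
    out/minikube-linux-amd64 -p functional-013051 image load --daemon kicbase/echo-server:functional-013051
    out/minikube-linux-amd64 -p functional-013051 image ls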

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image load --daemon kicbase/echo-server:functional-013051 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-013051
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image load --daemon kicbase/echo-server:functional-013051 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.95s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image save kicbase/echo-server:functional-013051 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image rm kicbase/echo-server:functional-013051 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)
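SaveToFile and LoadFromFile pair up into a daemon-independent round trip through a tarball; a sketch using the same paths this run used:

    # export the image from the cluster to a tar archive ...
    out/minikube-linux-amd64 -p functional-013051 image save kicbase/echo-server:functional-013051 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar
    # ... and import it back
    out/minikube-linux-amd64 -p functional-013051 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar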

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-013051
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 image save --daemon kicbase/echo-server:functional-013051 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-013051
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)
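image save --daemon is the reverse direction: it copies a tag out of the cluster into the host's Docker daemon, which is why the test can verify it with a plain docker image inspect:

    docker rmi kicbase/echo-server:functional-013051            # remove the host copy first
    out/minikube-linux-amd64 -p functional-013051 image save --daemon kicbase/echo-server:functional-013051
    docker image inspect kicbase/echo-server:functional-013051  # tag is back on the host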

TestFunctional/parallel/MountCmd/specific-port (2.05s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-013051 /tmp/TestFunctionalparallelMountCmdspecific-port334459019/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-013051 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (326.060372ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1025 09:31:47.410232  503346 retry.go:31] will retry after 619.580627ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-013051 /tmp/TestFunctionalparallelMountCmdspecific-port334459019/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-013051 ssh "sudo umount -f /mount-9p": exit status 1 (299.800098ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-013051 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-013051 /tmp/TestFunctionalparallelMountCmdspecific-port334459019/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.05s)
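The mount can be driven by hand on a fixed 9p port; a minimal sketch, with /tmp/hostdir standing in for the test's temp directory:

    # serve /tmp/hostdir into the guest at /mount-9p over port 46464 (runs in the foreground)
    out/minikube-linux-amd64 mount -p functional-013051 /tmp/hostdir:/mount-9p --port 46464 &
    # verify the 9p mount from inside the guest
    out/minikube-linux-amd64 -p functional-013051 ssh "findmnt -T /mount-9p | grep 9p"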

TestFunctional/parallel/MountCmd/VerifyCleanup (2.04s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-013051 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1428527505/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-013051 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1428527505/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-013051 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1428527505/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-013051 ssh "findmnt -T" /mount1: exit status 1 (391.239833ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1025 09:31:49.525342  503346 retry.go:31] will retry after 660.122692ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-013051 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-013051 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1428527505/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-013051 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1428527505/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-013051 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1428527505/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.04s)
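Cleanup does not require tracking individual mount PIDs; the --kill flag the test ends with tears down every mount process for the profile in one shot:

    out/minikube-linux-amd64 mount -p functional-013051 --kill=true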

TestFunctional/parallel/DockerEnv/bash (1.03s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-013051 docker-env) && out/minikube-linux-amd64 status -p functional-013051"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-013051 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.03s)
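docker-env emits shell exports that point the current shell at the Docker daemon inside the node, which is the pattern the test wraps in bash -c:

    eval $(out/minikube-linux-amd64 -p functional-013051 docker-env)
    docker images    # now lists images inside the minikube node, not the host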

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)
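update-context rewrites the profile's kubeconfig entry in case the cluster's IP or port has changed; a sketch (minikube names the context after the profile):

    out/minikube-linux-amd64 -p functional-013051 update-context
    kubectl config get-contexts functional-013051   # inspect the refreshed entry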

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-013051 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-013051 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-013051
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-013051
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-013051
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (162.15s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1025 09:43:07.897515  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:44:30.972898  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-496112 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m41.38681907s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (162.15s)
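The HA cluster under test comes up with a single invocation; the command from the log, trimmed to its essentials (--ha provisions multiple control-plane nodes behind the shared endpoint seen later in the logs, 192.168.49.254:8443, and --wait true blocks until components report healthy):

    out/minikube-linux-amd64 -p ha-496112 start --ha --memory 3072 --wait true --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 -p ha-496112 status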

TestMultiControlPlane/serial/DeployApp (5.29s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-496112 kubectl -- rollout status deployment/busybox: (2.983911652s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- exec busybox-7b57f96db7-jznjg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- exec busybox-7b57f96db7-mtv2f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- exec busybox-7b57f96db7-ttgnn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- exec busybox-7b57f96db7-jznjg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- exec busybox-7b57f96db7-mtv2f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- exec busybox-7b57f96db7-ttgnn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- exec busybox-7b57f96db7-jznjg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- exec busybox-7b57f96db7-mtv2f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- exec busybox-7b57f96db7-ttgnn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.29s)
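Each DNS assertion reduces to exec'ing nslookup inside a busybox replica; a sketch for one pod (replica names are generated, so <busybox-pod> below is a placeholder taken from the first command's output):

    kubectl --context ha-496112 get pods -o jsonpath='{.items[*].metadata.name}'
    kubectl --context ha-496112 exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local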

TestMultiControlPlane/serial/PingHostFromPods (1.28s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- exec busybox-7b57f96db7-jznjg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- exec busybox-7b57f96db7-jznjg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- exec busybox-7b57f96db7-mtv2f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- exec busybox-7b57f96db7-mtv2f -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- exec busybox-7b57f96db7-ttgnn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 kubectl -- exec busybox-7b57f96db7-ttgnn -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.28s)

TestMultiControlPlane/serial/AddWorkerNode (32.29s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-496112 node add --alsologtostderr -v 5: (31.364874254s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (32.29s)
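node add joins a worker by default (the new node becomes ha-496112-m04 in this run); status afterwards should list it with host and kubelet Running:

    out/minikube-linux-amd64 -p ha-496112 node add
    out/minikube-linux-amd64 -p ha-496112 status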

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-496112 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

TestMultiControlPlane/serial/CopyFile (18.1s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp testdata/cp-test.txt ha-496112:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3376096766/001/cp-test_ha-496112.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112:/home/docker/cp-test.txt ha-496112-m02:/home/docker/cp-test_ha-496112_ha-496112-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m02 "sudo cat /home/docker/cp-test_ha-496112_ha-496112-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112:/home/docker/cp-test.txt ha-496112-m03:/home/docker/cp-test_ha-496112_ha-496112-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m03 "sudo cat /home/docker/cp-test_ha-496112_ha-496112-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112:/home/docker/cp-test.txt ha-496112-m04:/home/docker/cp-test_ha-496112_ha-496112-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m04 "sudo cat /home/docker/cp-test_ha-496112_ha-496112-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp testdata/cp-test.txt ha-496112-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3376096766/001/cp-test_ha-496112-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112-m02:/home/docker/cp-test.txt ha-496112:/home/docker/cp-test_ha-496112-m02_ha-496112.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112 "sudo cat /home/docker/cp-test_ha-496112-m02_ha-496112.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112-m02:/home/docker/cp-test.txt ha-496112-m03:/home/docker/cp-test_ha-496112-m02_ha-496112-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m03 "sudo cat /home/docker/cp-test_ha-496112-m02_ha-496112-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112-m02:/home/docker/cp-test.txt ha-496112-m04:/home/docker/cp-test_ha-496112-m02_ha-496112-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m04 "sudo cat /home/docker/cp-test_ha-496112-m02_ha-496112-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp testdata/cp-test.txt ha-496112-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3376096766/001/cp-test_ha-496112-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112-m03:/home/docker/cp-test.txt ha-496112:/home/docker/cp-test_ha-496112-m03_ha-496112.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112 "sudo cat /home/docker/cp-test_ha-496112-m03_ha-496112.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112-m03:/home/docker/cp-test.txt ha-496112-m02:/home/docker/cp-test_ha-496112-m03_ha-496112-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m02 "sudo cat /home/docker/cp-test_ha-496112-m03_ha-496112-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112-m03:/home/docker/cp-test.txt ha-496112-m04:/home/docker/cp-test_ha-496112-m03_ha-496112-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m04 "sudo cat /home/docker/cp-test_ha-496112-m03_ha-496112-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp testdata/cp-test.txt ha-496112-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3376096766/001/cp-test_ha-496112-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112-m04:/home/docker/cp-test.txt ha-496112:/home/docker/cp-test_ha-496112-m04_ha-496112.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112 "sudo cat /home/docker/cp-test_ha-496112-m04_ha-496112.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112-m04:/home/docker/cp-test.txt ha-496112-m02:/home/docker/cp-test_ha-496112-m04_ha-496112-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m02 "sudo cat /home/docker/cp-test_ha-496112-m04_ha-496112-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 cp ha-496112-m04:/home/docker/cp-test.txt ha-496112-m03:/home/docker/cp-test_ha-496112-m04_ha-496112-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m03 "sudo cat /home/docker/cp-test_ha-496112-m04_ha-496112-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.10s)
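minikube cp addresses nodes as <profile>-mNN; a sketch of two of the shapes exercised above (node-to-host works the same way with a local destination path):

    # host file into a node
    out/minikube-linux-amd64 -p ha-496112 cp testdata/cp-test.txt ha-496112-m02:/home/docker/cp-test.txt
    # node to node
    out/minikube-linux-amd64 -p ha-496112 cp ha-496112-m02:/home/docker/cp-test.txt ha-496112-m03:/home/docker/cp-test.txt
    # read it back on the target node
    out/minikube-linux-amd64 -p ha-496112 ssh -n ha-496112-m03 "sudo cat /home/docker/cp-test.txt"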

TestMultiControlPlane/serial/StopSecondaryNode (11.7s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-496112 node stop m02 --alsologtostderr -v 5: (10.94771769s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-496112 status --alsologtostderr -v 5: exit status 7 (753.125709ms)

-- stdout --
	ha-496112
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-496112-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-496112-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-496112-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr **
	I1025 09:45:48.541495  600107 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:45:48.541825  600107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:45:48.541837  600107 out.go:374] Setting ErrFile to fd 2...
	I1025 09:45:48.541841  600107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:45:48.542101  600107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
	I1025 09:45:48.542342  600107 out.go:368] Setting JSON to false
	I1025 09:45:48.542378  600107 mustload.go:65] Loading cluster: ha-496112
	I1025 09:45:48.542507  600107 notify.go:220] Checking for updates...
	I1025 09:45:48.542844  600107 config.go:182] Loaded profile config "ha-496112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:45:48.542864  600107 status.go:174] checking status of ha-496112 ...
	I1025 09:45:48.544663  600107 cli_runner.go:164] Run: docker container inspect ha-496112 --format={{.State.Status}}
	I1025 09:45:48.564210  600107 status.go:371] ha-496112 host status = "Running" (err=<nil>)
	I1025 09:45:48.564261  600107 host.go:66] Checking if "ha-496112" exists ...
	I1025 09:45:48.564674  600107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-496112
	I1025 09:45:48.583333  600107 host.go:66] Checking if "ha-496112" exists ...
	I1025 09:45:48.583705  600107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:45:48.583769  600107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-496112
	I1025 09:45:48.603111  600107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/ha-496112/id_rsa Username:docker}
	I1025 09:45:48.702214  600107 ssh_runner.go:195] Run: systemctl --version
	I1025 09:45:48.708445  600107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:45:48.721060  600107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:45:48.780836  600107 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-25 09:45:48.769352607 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:45:48.781326  600107 kubeconfig.go:125] found "ha-496112" server: "https://192.168.49.254:8443"
	I1025 09:45:48.781352  600107 api_server.go:166] Checking apiserver status ...
	I1025 09:45:48.781392  600107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:45:48.795836  600107 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2209/cgroup
	W1025 09:45:48.806089  600107 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2209/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:45:48.806152  600107 ssh_runner.go:195] Run: ls
	I1025 09:45:48.810692  600107 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 09:45:48.816241  600107 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 09:45:48.816274  600107 status.go:463] ha-496112 apiserver status = Running (err=<nil>)
	I1025 09:45:48.816291  600107 status.go:176] ha-496112 status: &{Name:ha-496112 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:45:48.816308  600107 status.go:174] checking status of ha-496112-m02 ...
	I1025 09:45:48.816674  600107 cli_runner.go:164] Run: docker container inspect ha-496112-m02 --format={{.State.Status}}
	I1025 09:45:48.839074  600107 status.go:371] ha-496112-m02 host status = "Stopped" (err=<nil>)
	I1025 09:45:48.839097  600107 status.go:384] host is not running, skipping remaining checks
	I1025 09:45:48.839105  600107 status.go:176] ha-496112-m02 status: &{Name:ha-496112-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:45:48.839125  600107 status.go:174] checking status of ha-496112-m03 ...
	I1025 09:45:48.839367  600107 cli_runner.go:164] Run: docker container inspect ha-496112-m03 --format={{.State.Status}}
	I1025 09:45:48.860756  600107 status.go:371] ha-496112-m03 host status = "Running" (err=<nil>)
	I1025 09:45:48.860805  600107 host.go:66] Checking if "ha-496112-m03" exists ...
	I1025 09:45:48.861150  600107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-496112-m03
	I1025 09:45:48.881731  600107 host.go:66] Checking if "ha-496112-m03" exists ...
	I1025 09:45:48.882055  600107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:45:48.882132  600107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-496112-m03
	I1025 09:45:48.901941  600107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33188 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/ha-496112-m03/id_rsa Username:docker}
	I1025 09:45:49.004693  600107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:45:49.020341  600107 kubeconfig.go:125] found "ha-496112" server: "https://192.168.49.254:8443"
	I1025 09:45:49.020370  600107 api_server.go:166] Checking apiserver status ...
	I1025 09:45:49.020411  600107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:45:49.036371  600107 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2123/cgroup
	W1025 09:45:49.045800  600107 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2123/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:45:49.045857  600107 ssh_runner.go:195] Run: ls
	I1025 09:45:49.049944  600107 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 09:45:49.055253  600107 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 09:45:49.055279  600107 status.go:463] ha-496112-m03 apiserver status = Running (err=<nil>)
	I1025 09:45:49.055290  600107 status.go:176] ha-496112-m03 status: &{Name:ha-496112-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:45:49.055305  600107 status.go:174] checking status of ha-496112-m04 ...
	I1025 09:45:49.055628  600107 cli_runner.go:164] Run: docker container inspect ha-496112-m04 --format={{.State.Status}}
	I1025 09:45:49.075352  600107 status.go:371] ha-496112-m04 host status = "Running" (err=<nil>)
	I1025 09:45:49.075383  600107 host.go:66] Checking if "ha-496112-m04" exists ...
	I1025 09:45:49.075693  600107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-496112-m04
	I1025 09:45:49.094761  600107 host.go:66] Checking if "ha-496112-m04" exists ...
	I1025 09:45:49.095055  600107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:45:49.095097  600107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-496112-m04
	I1025 09:45:49.115504  600107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/ha-496112-m04/id_rsa Username:docker}
	I1025 09:45:49.216220  600107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:45:49.230312  600107 status.go:176] ha-496112-m04 status: &{Name:ha-496112-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.70s)
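Stopping one control-plane node leaves the cluster serving (the healthz probes in the log above still return 200 through the shared endpoint) but degrades status; the test keys on status exiting with code 7:

    out/minikube-linux-amd64 -p ha-496112 node stop m02
    out/minikube-linux-amd64 -p ha-496112 status; echo "status exit: $?"   # 7 expected while m02 is down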

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

TestMultiControlPlane/serial/RestartSecondaryNode (38.34s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-496112 node start m02 --alsologtostderr -v 5: (37.299815008s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (38.34s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.011550391s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.01s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (165.23s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 stop --alsologtostderr -v 5
E1025 09:46:29.665173  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:46:29.671711  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:46:29.683453  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:46:29.705064  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:46:29.746458  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:46:29.827913  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:46:29.989516  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:46:30.311357  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:46:30.953498  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:46:32.235881  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:46:34.797761  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:46:39.919961  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:46:50.162087  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-496112 stop --alsologtostderr -v 5: (33.87172853s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 start --wait true --alsologtostderr -v 5
E1025 09:47:10.643627  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:47:51.605160  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:48:07.897237  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:49:13.527017  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-496112 start --wait true --alsologtostderr -v 5: (2m11.20862242s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (165.23s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.44s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-496112 node delete m03 --alsologtostderr -v 5: (8.594742422s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.44s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

TestMultiControlPlane/serial/StopCluster (32.59s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-496112 stop --alsologtostderr -v 5: (32.474137438s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-496112 status --alsologtostderr -v 5: exit status 7 (119.826803ms)
-- stdout --
	ha-496112
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-496112-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-496112-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1025 09:49:57.267414  630901 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:49:57.267762  630901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:57.267773  630901 out.go:374] Setting ErrFile to fd 2...
	I1025 09:49:57.267776  630901 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:49:57.268004  630901 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
	I1025 09:49:57.268186  630901 out.go:368] Setting JSON to false
	I1025 09:49:57.268220  630901 mustload.go:65] Loading cluster: ha-496112
	I1025 09:49:57.268356  630901 notify.go:220] Checking for updates...
	I1025 09:49:57.268557  630901 config.go:182] Loaded profile config "ha-496112": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:49:57.268571  630901 status.go:174] checking status of ha-496112 ...
	I1025 09:49:57.269079  630901 cli_runner.go:164] Run: docker container inspect ha-496112 --format={{.State.Status}}
	I1025 09:49:57.288256  630901 status.go:371] ha-496112 host status = "Stopped" (err=<nil>)
	I1025 09:49:57.288285  630901 status.go:384] host is not running, skipping remaining checks
	I1025 09:49:57.288291  630901 status.go:176] ha-496112 status: &{Name:ha-496112 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:49:57.288327  630901 status.go:174] checking status of ha-496112-m02 ...
	I1025 09:49:57.288651  630901 cli_runner.go:164] Run: docker container inspect ha-496112-m02 --format={{.State.Status}}
	I1025 09:49:57.306648  630901 status.go:371] ha-496112-m02 host status = "Stopped" (err=<nil>)
	I1025 09:49:57.306688  630901 status.go:384] host is not running, skipping remaining checks
	I1025 09:49:57.306700  630901 status.go:176] ha-496112-m02 status: &{Name:ha-496112-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:49:57.306723  630901 status.go:174] checking status of ha-496112-m04 ...
	I1025 09:49:57.307004  630901 cli_runner.go:164] Run: docker container inspect ha-496112-m04 --format={{.State.Status}}
	I1025 09:49:57.324703  630901 status.go:371] ha-496112-m04 host status = "Stopped" (err=<nil>)
	I1025 09:49:57.324724  630901 status.go:384] host is not running, skipping remaining checks
	I1025 09:49:57.324730  630901 status.go:176] ha-496112-m04 status: &{Name:ha-496112-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.59s)
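Note: the non-zero exit recorded above is expected behavior, not a failure: minikube status reports cluster state through its exit code as well as stdout, and it returns 7 here because the hosts are stopped, which is exactly what this step asserts. A minimal sketch of scripting around that (the echo branch is illustrative):

    out/minikube-linux-amd64 -p ha-496112 status || echo "status exit code: $?"
    # prints 7 while all nodes are stopped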

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (100.86s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E1025 09:51:29.665094  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-496112 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m40.019585335s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (100.86s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.73s)

TestMultiControlPlane/serial/AddSecondaryNode (47.09s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 node add --control-plane --alsologtostderr -v 5
E1025 09:51:57.373729  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-496112 node add --control-plane --alsologtostderr -v 5: (46.149368879s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-496112 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (47.09s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.98s)

TestImageBuild/serial/Setup (23.31s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-722017 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-722017 --driver=docker  --container-runtime=docker: (23.30569485s)
--- PASS: TestImageBuild/serial/Setup (23.31s)

TestImageBuild/serial/NormalBuild (1.16s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-722017
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-722017: (1.156350713s)
--- PASS: TestImageBuild/serial/NormalBuild (1.16s)

TestImageBuild/serial/BuildWithBuildArg (0.7s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-722017
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.70s)
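Note: --build-opt values are forwarded to the underlying Docker build, so the invocation above is roughly equivalent to running the following against the cluster's Docker daemon (a hedged equivalence for illustration, not a command from this log):

    docker build -t aaa:latest --build-arg ENV_A=test_env_str --no-cache ./testdata/image-build/test-arg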

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.51s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-722017
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.51s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.51s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-722017
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.51s)

TestJSONOutput/start/Command (61.5s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-201952 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
E1025 09:53:07.899823  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-201952 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m1.495866186s)
--- PASS: TestJSONOutput/start/Command (61.50s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.52s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-201952 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.52s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.49s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-201952 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.49s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.91s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-201952 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-201952 --output=json --user=testUser: (10.913759218s)
--- PASS: TestJSONOutput/stop/Command (10.91s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-355622 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-355622 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (87.869212ms)
-- stdout --
	{"specversion":"1.0","id":"f905ecf0-4a6b-45ea-a2c5-05174523ee55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-355622] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"79dae3d7-bf97-428e-b278-5a1b8f6a837b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21767"}}
	{"specversion":"1.0","id":"461db705-b91d-4a04-b0ff-54f96b188118","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"90cf491c-0cee-410f-8962-913eedeb3d31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig"}}
	{"specversion":"1.0","id":"cc3ba261-0f06-4d74-aaad-9f5fc110622c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube"}}
	{"specversion":"1.0","id":"7cf66f9d-c707-472a-848d-7a7f948f5fc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"971a2b0b-3488-4873-9c3f-2dbfea5af46c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2b59cb9a-f6f4-4c44-a374-2f83e7e63d0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-355622" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-355622
--- PASS: TestErrorJSONOutput (0.26s)
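Note: every line emitted under --output=json is a CloudEvents envelope (specversion, type, data), so the failure above can be pulled out of the stream mechanically; a minimal sketch using jq (the filter is illustrative):

    out/minikube-linux-amd64 start -p json-output-error-355622 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64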

                                                
                                    
TestKicCustomNetwork/create_custom_network (24.13s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-750473 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-750473 --network=: (21.938988501s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-750473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-750473
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-750473: (2.170283154s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.13s)

TestKicCustomNetwork/use_default_bridge_network (22.44s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-378838 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-378838 --network=bridge: (20.391599211s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-378838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-378838
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-378838: (2.022488174s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.44s)

TestKicExistingNetwork (27.69s)
=== RUN   TestKicExistingNetwork
I1025 09:55:05.653284  503346 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1025 09:55:05.671350  503346 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1025 09:55:05.671436  503346 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1025 09:55:05.671462  503346 cli_runner.go:164] Run: docker network inspect existing-network
W1025 09:55:05.689083  503346 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1025 09:55:05.689124  503346 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1025 09:55:05.689141  503346 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1025 09:55:05.689257  503346 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1025 09:55:05.708712  503346 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ed3c1622b44b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:ba:2f:2c:44:75} reservation:<nil>}
I1025 09:55:05.709320  503346 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00196eef0}
I1025 09:55:05.709365  503346 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1025 09:55:05.709431  503346 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1025 09:55:05.768450  503346 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-473227 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-473227 --network=existing-network: (25.505841473s)
helpers_test.go:175: Cleaning up "existing-network-473227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-473227
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-473227: (2.030050601s)
I1025 09:55:33.324834  503346 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (27.69s)
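Note: the setup traced above reduces to two commands: pre-create a bridge network on the subnet minikube identified as free, then point --network at it so minikube adopts the existing network instead of creating one (subnet values are from this run and would differ elsewhere):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    out/minikube-linux-amd64 start -p existing-network-473227 --network=existing-network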

                                                
                                    
TestKicCustomSubnet (24.2s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-913154 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-913154 --subnet=192.168.60.0/24: (21.992221857s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-913154 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-913154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-913154
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-913154: (2.188515993s)
--- PASS: TestKicCustomSubnet (24.20s)
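Note: the assertion here is simply that the subnet Docker actually attached to the profile's network matches the --subnet flag; standalone, with the expected value as a comment:

    docker network inspect custom-subnet-913154 --format "{{(index .IPAM.Config 0).Subnet}}"
    # expected: 192.168.60.0/24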

                                                
                                    
TestKicStaticIP (25.31s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-084221 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-084221 --static-ip=192.168.200.200: (22.960380948s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-084221 ip
helpers_test.go:175: Cleaning up "static-ip-084221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-084221
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-084221: (2.201037023s)
--- PASS: TestKicStaticIP (25.31s)
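Note: the same pattern for a pinned node address: start with --static-ip, then confirm with the ip subcommand (the expected value is inferred from the flag, not captured output):

    out/minikube-linux-amd64 start -p static-ip-084221 --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p static-ip-084221 ip
    # expected: 192.168.200.200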

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (51.82s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-776997 --driver=docker  --container-runtime=docker
E1025 09:56:29.664794  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-776997 --driver=docker  --container-runtime=docker: (23.301830353s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-779177 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-779177 --driver=docker  --container-runtime=docker: (22.817945426s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-776997
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-779177
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-779177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-779177
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-779177: (2.209231664s)
helpers_test.go:175: Cleaning up "first-776997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-776997
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-776997: (2.210138293s)
--- PASS: TestMinikubeProfile (51.82s)

TestMountStart/serial/StartWithMountFirst (7.48s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-384608 --memory=3072 --mount-string /tmp/TestMountStartserial3586403312/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-384608 --memory=3072 --mount-string /tmp/TestMountStartserial3586403312/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.478971262s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.48s)
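Note: the long start command boils down to --mount-string host-path:guest-path, plus flags pinning the mount's uid, gid, msize, and port; the Verify steps that follow just list the guest path over ssh. Trimmed to the essentials (paths and port from this run):

    out/minikube-linux-amd64 start -p mount-start-1-384608 --no-kubernetes \
      --mount-string /tmp/TestMountStartserial3586403312/001:/minikube-host --mount-port 46464
    out/minikube-linux-amd64 -p mount-start-1-384608 ssh -- ls /minikube-host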

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-384608 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (7.67s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-417828 --memory=3072 --mount-string /tmp/TestMountStartserial3586403312/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-417828 --memory=3072 --mount-string /tmp/TestMountStartserial3586403312/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.66709533s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.67s)

TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-417828 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.58s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-384608 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-384608 --alsologtostderr -v=5: (1.581359028s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-417828 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.27s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-417828
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-417828: (1.266313327s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (8.37s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-417828
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-417828: (7.371640359s)
--- PASS: TestMountStart/serial/RestartStopped (8.37s)

TestMountStart/serial/VerifyMountPostStop (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-417828 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (82.68s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-315751 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E1025 09:58:07.897181  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-315751 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m22.169784677s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (82.68s)

TestMultiNode/serial/DeployApp2Nodes (4.22s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315751 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315751 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-315751 -- rollout status deployment/busybox: (2.667899545s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315751 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315751 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315751 -- exec busybox-7b57f96db7-nhgjg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315751 -- exec busybox-7b57f96db7-xm8mh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315751 -- exec busybox-7b57f96db7-nhgjg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315751 -- exec busybox-7b57f96db7-xm8mh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315751 -- exec busybox-7b57f96db7-nhgjg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315751 -- exec busybox-7b57f96db7-xm8mh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.22s)
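Note: the DNS checks walk each busybox pod through three lookups of increasing specificity: the external name kubernetes.io, the in-cluster service kubernetes.default, and its fully qualified form. One leg of the matrix, standalone (the pod name comes from this run's ReplicaSet hash and would differ elsewhere):

    out/minikube-linux-amd64 kubectl -p multinode-315751 -- exec busybox-7b57f96db7-nhgjg -- nslookup kubernetes.default.svc.cluster.local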

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.85s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315751 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315751 -- exec busybox-7b57f96db7-nhgjg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315751 -- exec busybox-7b57f96db7-nhgjg -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315751 -- exec busybox-7b57f96db7-xm8mh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-315751 -- exec busybox-7b57f96db7-xm8mh -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)
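Note: the awk/cut pipeline is just extracting the resolved address from busybox nslookup's fixed-format output: NR==5 selects the answer line and -f3 its third space-separated field, i.e. the IP, which the next step then pings. Expanded (the commented address is the value this run resolved, per the ping target above):

    out/minikube-linux-amd64 kubectl -p multinode-315751 -- exec busybox-7b57f96db7-nhgjg -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # 192.168.67.1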

                                                
                                    
TestMultiNode/serial/AddNode (31.72s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-315751 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-315751 -v=5 --alsologtostderr: (31.053197817s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (31.72s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-315751 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.69s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (10.15s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 cp testdata/cp-test.txt multinode-315751:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 cp multinode-315751:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1924405030/001/cp-test_multinode-315751.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 cp multinode-315751:/home/docker/cp-test.txt multinode-315751-m02:/home/docker/cp-test_multinode-315751_multinode-315751-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751-m02 "sudo cat /home/docker/cp-test_multinode-315751_multinode-315751-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 cp multinode-315751:/home/docker/cp-test.txt multinode-315751-m03:/home/docker/cp-test_multinode-315751_multinode-315751-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751-m03 "sudo cat /home/docker/cp-test_multinode-315751_multinode-315751-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 cp testdata/cp-test.txt multinode-315751-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 cp multinode-315751-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1924405030/001/cp-test_multinode-315751-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 cp multinode-315751-m02:/home/docker/cp-test.txt multinode-315751:/home/docker/cp-test_multinode-315751-m02_multinode-315751.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751 "sudo cat /home/docker/cp-test_multinode-315751-m02_multinode-315751.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 cp multinode-315751-m02:/home/docker/cp-test.txt multinode-315751-m03:/home/docker/cp-test_multinode-315751-m02_multinode-315751-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751-m03 "sudo cat /home/docker/cp-test_multinode-315751-m02_multinode-315751-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 cp testdata/cp-test.txt multinode-315751-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 cp multinode-315751-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1924405030/001/cp-test_multinode-315751-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 cp multinode-315751-m03:/home/docker/cp-test.txt multinode-315751:/home/docker/cp-test_multinode-315751-m03_multinode-315751.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751 "sudo cat /home/docker/cp-test_multinode-315751-m03_multinode-315751.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 cp multinode-315751-m03:/home/docker/cp-test.txt multinode-315751-m02:/home/docker/cp-test_multinode-315751-m03_multinode-315751-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751-m02 "sudo cat /home/docker/cp-test_multinode-315751-m03_multinode-315751-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.15s)
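Note: the copy matrix above exercises every direction of minikube cp: host to node, node back to the host, and node to node across all three machines, each transfer verified with ssh -n <node> "sudo cat ...". One representative node-to-node leg:

    out/minikube-linux-amd64 -p multinode-315751 cp multinode-315751-m02:/home/docker/cp-test.txt \
      multinode-315751-m03:/home/docker/cp-test_multinode-315751-m02_multinode-315751-m03.txt
    out/minikube-linux-amd64 -p multinode-315751 ssh -n multinode-315751-m03 \
      "sudo cat /home/docker/cp-test_multinode-315751-m02_multinode-315751-m03.txt"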

                                                
                                    
TestMultiNode/serial/StopNode (2.29s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-315751 node stop m03: (1.277539591s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-315751 status: exit status 7 (508.214169ms)
-- stdout --
	multinode-315751
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-315751-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-315751-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-315751 status --alsologtostderr: exit status 7 (505.372313ms)
-- stdout --
	multinode-315751
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-315751-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-315751-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1025 09:59:56.217478  713726 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:59:56.217629  713726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:59:56.217635  713726 out.go:374] Setting ErrFile to fd 2...
	I1025 09:59:56.217639  713726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:59:56.217824  713726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
	I1025 09:59:56.218002  713726 out.go:368] Setting JSON to false
	I1025 09:59:56.218032  713726 mustload.go:65] Loading cluster: multinode-315751
	I1025 09:59:56.218194  713726 notify.go:220] Checking for updates...
	I1025 09:59:56.218379  713726 config.go:182] Loaded profile config "multinode-315751": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 09:59:56.218392  713726 status.go:174] checking status of multinode-315751 ...
	I1025 09:59:56.218828  713726 cli_runner.go:164] Run: docker container inspect multinode-315751 --format={{.State.Status}}
	I1025 09:59:56.237207  713726 status.go:371] multinode-315751 host status = "Running" (err=<nil>)
	I1025 09:59:56.237231  713726 host.go:66] Checking if "multinode-315751" exists ...
	I1025 09:59:56.237512  713726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-315751
	I1025 09:59:56.255301  713726 host.go:66] Checking if "multinode-315751" exists ...
	I1025 09:59:56.255573  713726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:59:56.255657  713726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-315751
	I1025 09:59:56.273699  713726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33303 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/multinode-315751/id_rsa Username:docker}
	I1025 09:59:56.371459  713726 ssh_runner.go:195] Run: systemctl --version
	I1025 09:59:56.378070  713726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:59:56.390917  713726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:59:56.447537  713726 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-25 09:59:56.438257387 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652084736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1025 09:59:56.448118  713726 kubeconfig.go:125] found "multinode-315751" server: "https://192.168.67.2:8443"
	I1025 09:59:56.448149  713726 api_server.go:166] Checking apiserver status ...
	I1025 09:59:56.448186  713726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:59:56.460708  713726 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2165/cgroup
	W1025 09:59:56.469366  713726 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2165/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:59:56.469422  713726 ssh_runner.go:195] Run: ls
	I1025 09:59:56.473328  713726 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1025 09:59:56.477491  713726 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1025 09:59:56.477513  713726 status.go:463] multinode-315751 apiserver status = Running (err=<nil>)
	I1025 09:59:56.477524  713726 status.go:176] multinode-315751 status: &{Name:multinode-315751 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:59:56.477542  713726 status.go:174] checking status of multinode-315751-m02 ...
	I1025 09:59:56.477820  713726 cli_runner.go:164] Run: docker container inspect multinode-315751-m02 --format={{.State.Status}}
	I1025 09:59:56.495818  713726 status.go:371] multinode-315751-m02 host status = "Running" (err=<nil>)
	I1025 09:59:56.495852  713726 host.go:66] Checking if "multinode-315751-m02" exists ...
	I1025 09:59:56.496168  713726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-315751-m02
	I1025 09:59:56.513567  713726 host.go:66] Checking if "multinode-315751-m02" exists ...
	I1025 09:59:56.513924  713726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:59:56.513993  713726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-315751-m02
	I1025 09:59:56.531386  713726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33308 SSHKeyPath:/home/jenkins/minikube-integration/21767-499776/.minikube/machines/multinode-315751-m02/id_rsa Username:docker}
	I1025 09:59:56.629364  713726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:59:56.642622  713726 status.go:176] multinode-315751-m02 status: &{Name:multinode-315751-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:59:56.642658  713726 status.go:174] checking status of multinode-315751-m03 ...
	I1025 09:59:56.642898  713726 cli_runner.go:164] Run: docker container inspect multinode-315751-m03 --format={{.State.Status}}
	I1025 09:59:56.660638  713726 status.go:371] multinode-315751-m03 host status = "Stopped" (err=<nil>)
	I1025 09:59:56.660665  713726 status.go:384] host is not running, skipping remaining checks
	I1025 09:59:56.660671  713726 status.go:176] multinode-315751-m03 status: &{Name:multinode-315751-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
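The status.go:176 entries above dump a Go struct literal per node. A minimal sketch of a struct with the same shape, with the field set copied from the log rather than from minikube's source:

	package main

	import "fmt"

	// Shaped like the literal printed by status.go:176 above; the fields are
	// inferred from the log, not taken from minikube's code (sketch only).
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
		TimeToStop, DockerEnv, PodManEnv           string
	}

	func main() {
		s := Status{Name: "multinode-315751", Host: "Running", Kubelet: "Running",
			APIServer: "Running", Kubeconfig: "Configured"}
		// Prints {Name:multinode-315751 Host:Running ...}, matching the log dump.
		fmt.Printf("%+v\n", s)
	}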

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-315751 node start m03 -v=5 --alsologtostderr: (8.540156796s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.27s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (72.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-315751
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-315751
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-315751: (22.816687652s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-315751 --wait=true -v=5 --alsologtostderr
E1025 10:01:10.974908  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-315751 --wait=true -v=5 --alsologtostderr: (49.281369465s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-315751
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.23s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-315751 node delete m03: (4.738871215s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.38s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 stop
E1025 10:01:29.665564  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-315751 stop: (21.667469972s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-315751 status: exit status 7 (105.200163ms)

                                                
                                                
-- stdout --
	multinode-315751
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-315751-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-315751 status --alsologtostderr: exit status 7 (102.824239ms)

                                                
                                                
-- stdout --
	multinode-315751
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-315751-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:01:45.382330  728391 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:01:45.382429  728391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:01:45.382436  728391 out.go:374] Setting ErrFile to fd 2...
	I1025 10:01:45.382440  728391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:01:45.382637  728391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-499776/.minikube/bin
	I1025 10:01:45.382817  728391 out.go:368] Setting JSON to false
	I1025 10:01:45.382853  728391 mustload.go:65] Loading cluster: multinode-315751
	I1025 10:01:45.382984  728391 notify.go:220] Checking for updates...
	I1025 10:01:45.383242  728391 config.go:182] Loaded profile config "multinode-315751": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
	I1025 10:01:45.383257  728391 status.go:174] checking status of multinode-315751 ...
	I1025 10:01:45.383702  728391 cli_runner.go:164] Run: docker container inspect multinode-315751 --format={{.State.Status}}
	I1025 10:01:45.402746  728391 status.go:371] multinode-315751 host status = "Stopped" (err=<nil>)
	I1025 10:01:45.402791  728391 status.go:384] host is not running, skipping remaining checks
	I1025 10:01:45.402801  728391 status.go:176] multinode-315751 status: &{Name:multinode-315751 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:01:45.402833  728391 status.go:174] checking status of multinode-315751-m02 ...
	I1025 10:01:45.403112  728391 cli_runner.go:164] Run: docker container inspect multinode-315751-m02 --format={{.State.Status}}
	I1025 10:01:45.421308  728391 status.go:371] multinode-315751-m02 host status = "Stopped" (err=<nil>)
	I1025 10:01:45.421337  728391 status.go:384] host is not running, skipping remaining checks
	I1025 10:01:45.421343  728391 status.go:176] multinode-315751-m02 status: &{Name:multinode-315751-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.88s)
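Both status invocations above exit with status 7. `minikube status --help` (in recent releases) describes this exit code as a bitmask, with 1 for the host, 2 for the cluster, and 4 for Kubernetes being not OK, so 7 is the everything-stopped case seen here. A small decoding sketch under that assumption:

	package main

	import "fmt"

	// Decode a minikube status exit code as the bitmask described in
	// `minikube status --help` (assumed here): 1 host, 2 cluster, 4 Kubernetes.
	func main() {
		code := 7 // the exit status seen above
		parts := []struct {
			bit  int
			name string
		}{{1, "host"}, {2, "cluster"}, {4, "kubernetes"}}
		for _, p := range parts {
			if code&p.bit != 0 {
				fmt.Printf("%s: not OK\n", p.name)
			}
		}
	}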

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-315751 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-315751 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (52.602428324s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-315751 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.24s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (30.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-315751
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-315751-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-315751-m02 --driver=docker  --container-runtime=docker: exit status 14 (89.659227ms)

                                                
                                                
-- stdout --
	* [multinode-315751-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-315751-m02' is duplicated with machine name 'multinode-315751-m02' in profile 'multinode-315751'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-315751-m03 --driver=docker  --container-runtime=docker
E1025 10:02:52.737791  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-315751-m03 --driver=docker  --container-runtime=docker: (27.360146181s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-315751
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-315751: exit status 80 (306.04382ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-315751 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-315751-m03 already exists in multinode-315751-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-315751-m03
E1025 10:03:07.897075  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-315751-m03: (2.218289012s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (30.04s)
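The two refusals above are the same uniqueness rule seen from both sides: a new profile cannot reuse a machine name owned by an existing multi-node profile, and `node add` cannot reuse a name that already exists as a standalone profile. A rough sketch of the first check, with the machine-name scheme (profile, profile-m02, profile-m03, ...) inferred from the log rather than from minikube's source:

	package main

	import "fmt"

	// machineNames lists the node names a multi-node profile owns, following
	// the naming pattern visible in the log above (illustrative only).
	func machineNames(profile string, nodes int) []string {
		names := []string{profile}
		for i := 2; i <= nodes; i++ {
			names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
		}
		return names
	}

	func main() {
		requested := "multinode-315751-m02" // the profile name the test tries to start
		for _, m := range machineNames("multinode-315751", 3) {
			if m == requested {
				fmt.Printf("profile name %q is duplicated with machine name %q\n", requested, m)
				return
			}
		}
		fmt.Println("name is free")
	}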

                                                
                                    
TestPreload (105.81s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-977271 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-977271 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (41.31723855s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-977271 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-977271 image pull gcr.io/k8s-minikube/busybox: (1.650387192s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-977271
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-977271: (10.889628606s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-977271 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-977271 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (49.386344997s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-977271 image list
helpers_test.go:175: Cleaning up "test-preload-977271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-977271
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-977271: (2.329433382s)
--- PASS: TestPreload (105.81s)

                                                
                                    
TestSkaffold (76.39s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2442934538 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-268858 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-268858 --memory=3072 --driver=docker  --container-runtime=docker: (22.902831946s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2442934538 run --minikube-profile skaffold-268858 --kube-context skaffold-268858 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2442934538 run --minikube-profile skaffold-268858 --kube-context skaffold-268858 --status-check=true --port-forward=false --interactive=false: (38.49472957s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-846844fc8c-6jmks" [04fc2304-66f0-4000-a9aa-ad49227d2333] Running
E1025 10:06:29.664756  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003840231s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-8f7f8b8f5-jvjqg" [652c293c-0c3b-4b6d-90c4-0325ecdbc539] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004548803s
helpers_test.go:175: Cleaning up "skaffold-268858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-268858
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-268858: (3.185275276s)
--- PASS: TestSkaffold (76.39s)

                                                
                                    
TestInsufficientStorage (10.87s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-256177 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-256177 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.512139715s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5e6fac53-9bc5-48ad-b0d7-80bc92a0b84c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-256177] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a419b6d0-1b46-49ba-9d5a-c6e6426fab30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21767"}}
	{"specversion":"1.0","id":"403f62fa-9782-4fe7-9c37-13e7372cd10d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eb4781ba-c4df-40ee-afbb-ce4615ea8f13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig"}}
	{"specversion":"1.0","id":"b9e97bf2-3b0b-4950-a0cf-b5dd9dac3e09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube"}}
	{"specversion":"1.0","id":"629c3d59-1aee-4357-a033-c9af8228d212","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f4f2ff00-7927-4a6f-b031-8d108d0fb815","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a07e8d8f-b62e-4267-b664-21bc23a3dfa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5ce674a5-1159-47b6-9a34-6dd544a17f0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6f59201b-886a-4213-a01b-5caa1d0b8457","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"217d2a50-b4de-4926-aa38-6597c5889034","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"656c8ac5-3096-4d99-9abf-8cd675c1e6cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-256177\" primary control-plane node in \"insufficient-storage-256177\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"17d77bb8-cbba-4675-9d41-eb4e41fc4b54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4b66caf-f27c-4886-87c5-c1dcaf456b84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b2312d5-7815-4bd8-bda4-5c43f7ba483e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-256177 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-256177 --output=json --layout=cluster: exit status 7 (296.639475ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-256177","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-256177","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 10:06:51.267907  765405 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-256177" does not appear in /home/jenkins/minikube-integration/21767-499776/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-256177 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-256177 --output=json --layout=cluster: exit status 7 (293.739941ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-256177","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-256177","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 10:06:51.562020  765515 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-256177" does not appear in /home/jenkins/minikube-integration/21767-499776/kubeconfig
	E1025 10:06:51.572571  765515 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/insufficient-storage-256177/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-256177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-256177
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-256177: (1.765769417s)
--- PASS: TestInsufficientStorage (10.87s)
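With --output=json, every progress line above is a CloudEvents-style record carrying specversion, type, and a string-valued data map. A minimal decoder for the final RSRC_DOCKER_STORAGE error event, with the struct shape copied from the log output:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event mirrors the fields visible in the JSON lines above; it is a
	// sketch for reading this report, not minikube's own type.
	type event struct {
		SpecVersion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			panic(err)
		}
		// io.k8s.sigs.minikube.error RSRC_DOCKER_STORAGE exit code 26
		fmt.Println(e.Type, e.Data["name"], "exit code", e.Data["exitcode"])
	}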

                                                
                                    
TestRunningBinaryUpgrade (50.95s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1367703572 start -p running-upgrade-095595 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1367703572 start -p running-upgrade-095595 --memory=3072 --vm-driver=docker  --container-runtime=docker: (25.903763303s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-095595 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-095595 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (22.076722737s)
helpers_test.go:175: Cleaning up "running-upgrade-095595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-095595
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-095595: (2.313334318s)
--- PASS: TestRunningBinaryUpgrade (50.95s)

                                                
                                    
TestKubernetesUpgrade (347.59s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-046107 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-046107 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.090785539s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-046107
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-046107: (10.93418149s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-046107 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-046107 status --format={{.Host}}: exit status 7 (102.896501ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-046107 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-046107 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m34.778107445s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-046107 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-046107 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-046107 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (102.339788ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-046107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-046107
	    minikube start -p kubernetes-upgrade-046107 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0461072 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-046107 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-046107 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1025 10:14:12.126513  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-046107 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.677503605s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-046107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-046107
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-046107: (2.828152292s)
--- PASS: TestKubernetesUpgrade (347.59s)
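The downgrade attempt fails by design: minikube refuses to move the existing v1.34.1 cluster back to v1.28.0 and exits with status 106. A sketch of such a guard using semantic-version comparison; this illustrates the rule, it is not minikube's actual implementation:

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// Refuse any requested version older than the cluster's current one,
	// matching the K8S_DOWNGRADE_UNSUPPORTED behavior seen above (sketch).
	func main() {
		current, requested := "v1.34.1", "v1.28.0"
		if semver.Compare(requested, current) < 0 {
			fmt.Printf("unable to safely downgrade existing Kubernetes %s cluster to %s\n",
				current, requested)
		}
	}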

                                                
                                    
TestMissingContainerUpgrade (86.95s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1332763759 start -p missing-upgrade-873158 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1332763759 start -p missing-upgrade-873158 --memory=3072 --driver=docker  --container-runtime=docker: (27.703902174s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-873158
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-873158: (10.453513622s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-873158
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-873158 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-873158 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (45.848638271s)
helpers_test.go:175: Cleaning up "missing-upgrade-873158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-873158
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-873158: (2.283598829s)
--- PASS: TestMissingContainerUpgrade (86.95s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-661857 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-661857 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (107.466722ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-661857] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-499776/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-499776/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
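Exit status 14 here comes straight from flag validation: --kubernetes-version and --no-kubernetes are mutually exclusive. A sketch of that check with the standard library's flag package (the flag names match the log; minikube itself wires this through cobra, so the plumbing below is illustrative):

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		kubeVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()
		// Mutually exclusive flags: mirror the MK_USAGE error and exit code above.
		if *noK8s && *kubeVersion != "" {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
	}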

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (35.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-661857 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-661857 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (35.261065637s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-661857 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.66s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (33.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-661857 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-661857 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (30.833967174s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-661857 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-661857 status -o json: exit status 2 (332.118814ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-661857","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-661857
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-661857: (1.982885555s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (33.15s)

                                                
                                    
TestNoKubernetes/serial/Start (7.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-661857 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-661857 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (7.059248379s)
--- PASS: TestNoKubernetes/serial/Start (7.06s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.47s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (72.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3687987364 start -p stopped-upgrade-204346 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3687987364 start -p stopped-upgrade-204346 --memory=3072 --vm-driver=docker  --container-runtime=docker: (44.634065458s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3687987364 -p stopped-upgrade-204346 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3687987364 -p stopped-upgrade-204346 stop: (10.833807649s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-204346 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-204346 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (16.693508741s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (72.16s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-661857 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-661857 "sudo systemctl is-active --quiet service kubelet": exit status 1 (348.816738ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
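The probe above relies on `systemctl is-active --quiet` exiting non-zero when a unit is not running; systemd conventionally returns 3 for an inactive unit, which matches the "Process exited with status 3" in the log. A simplified local version of the same probe (checking only the kubelet unit, without the ssh hop):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("kubelet is active")
		case errors.As(err, &exitErr):
			// 3 = inactive, by systemd convention; any non-zero exit means "not running".
			fmt.Printf("kubelet not active (exit %d)\n", exitErr.ExitCode())
		default:
			fmt.Println("could not run systemctl:", err)
		}
	}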

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.39s)

                                                
                                    
TestNoKubernetes/serial/Stop (5.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-661857
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-661857: (5.690942152s)
--- PASS: TestNoKubernetes/serial/Stop (5.69s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (9.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-661857 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-661857 --driver=docker  --container-runtime=docker: (9.504668503s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.50s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-661857 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-661857 "sudo systemctl is-active --quiet service kubelet": exit status 1 (336.289334ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-204346
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                    
TestPause/serial/Start (38.38s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-371105 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-371105 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (38.379477099s)
--- PASS: TestPause/serial/Start (38.38s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (68.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m8.765630815s)
--- PASS: TestNetworkPlugins/group/auto/Start (68.77s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (53.06s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-371105 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-371105 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (53.03722743s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (53.06s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-797295 "pgrep -a kubelet"
I1025 10:11:24.585306  503346 config.go:182] Loaded profile config "auto-797295": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-797295 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dxtnn" [1d76adb3-26ec-4b11-be9c-1052170cb9a9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dxtnn" [1d76adb3-26ec-4b11-be9c-1052170cb9a9] Running
E1025 10:11:28.910600  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003671545s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.20s)

                                                
                                    
TestPause/serial/Pause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-371105 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.68s)

                                                
                                    
TestPause/serial/VerifyStatus (0.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-371105 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-371105 --output=json --layout=cluster: exit status 2 (432.753689ms)

                                                
                                                
-- stdout --
	{"Name":"pause-371105","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-371105","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
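The --layout=cluster JSON reuses HTTP-flavoured status codes for component state. Collecting only the code/name pairs that actually occur in this report (the authoritative mapping lives in minikube's source):

	package main

	import "fmt"

	// Status codes and names as they appear in the --layout=cluster output
	// of this report; other codes exist in minikube but are not shown here.
	var statusNames = map[int]string{
		200: "OK",
		405: "Stopped",
		418: "Paused",
		500: "Error",
		507: "InsufficientStorage",
	}

	func main() {
		for _, c := range []int{418, 405, 200} {
			fmt.Printf("%d => %s\n", c, statusNames[c])
		}
	}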

                                                
                                    
TestPause/serial/Unpause (0.61s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-371105 --alsologtostderr -v=5
E1025 10:11:28.262341  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:28.268777  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:28.280205  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:28.301655  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:28.343655  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestPause/serial/Unpause (0.61s)

                                                
                                    
TestPause/serial/PauseAgain (0.73s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-371105 --alsologtostderr -v=5
E1025 10:11:28.426028  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:28.588350  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestPause/serial/PauseAgain (0.73s)

                                                
                                    
TestPause/serial/DeletePaused (2.5s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-371105 --alsologtostderr -v=5
E1025 10:11:29.552567  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:29.664750  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:11:30.834889  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-371105 --alsologtostderr -v=5: (2.502853218s)
--- PASS: TestPause/serial/DeletePaused (2.50s)

TestPause/serial/VerifyDeletedResources (15.34s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.276761371s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-371105
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-371105: exit status 1 (18.90419ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-371105: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.34s)
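The three docker checks above can be replayed by hand after the delete; the non-zero exit from docker volume inspect is the expected signal that the profile's volume is gone. A rough sketch:

docker ps -a --filter name=pause-371105       # should match no containers
docker volume inspect pause-371105            # exit status 1: "no such volume"
docker network ls --filter name=pause-371105  # should match no network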

TestNetworkPlugins/group/kindnet/Start (49.22s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (49.222233872s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (49.22s)
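Each Start test in this group boots a fresh profile with an explicit networking backend selected via --cni. A minimal sketch of the invocation above (profile name hypothetical):

out/minikube-linux-amd64 start -p kindnet-demo --memory=3072 \
  --cni=kindnet --driver=docker --container-runtime=docker \
  --wait=true --wait-timeout=15m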

TestNetworkPlugins/group/auto/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-797295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
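The DNS, Localhost, and HairPin checks all exec into the suite's netcat deployment: DNS resolves the in-cluster API service, Localhost connects to the pod's own port over 127.0.0.1, and HairPin connects back to the pod through its own service name. Replayed by hand against this profile:

kubectl --context auto-797295 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context auto-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The same trio repeats below for every network plugin under test.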

TestNetworkPlugins/group/calico/Start (72.05s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E1025 10:11:48.760790  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m12.050737107s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.05s)

TestNetworkPlugins/group/custom-flannel/Start (39.48s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E1025 10:12:09.242862  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (39.475337906s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (39.48s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-xc72d" [4fc58697-2b1c-44c4-83de-286119bf84b3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005514962s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
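The ControllerPod step waits for the CNI's daemon pods with the suite's own poller; roughly the same check with stock kubectl (a sketch, not the test's actual mechanism):

kubectl --context kindnet-797295 -n kube-system wait pod \
  -l app=kindnet --for=condition=Ready --timeout=600s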

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-797295 "pgrep -a kubelet"
I1025 10:12:29.943705  503346 config.go:182] Loaded profile config "kindnet-797295": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-797295 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wwmm6" [3313490f-3829-4183-9c64-0caafeb3ec91] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wwmm6" [3313490f-3829-4183-9c64-0caafeb3ec91] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003202061s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)
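NetCatPod force-replaces the netcat deployment and waits for its pod to go Ready; an equivalent manual sequence (manifest path as used by the suite):

kubectl --context kindnet-797295 replace --force -f testdata/netcat-deployment.yaml
kubectl --context kindnet-797295 rollout status deployment/netcat --timeout=15m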

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-797295 "pgrep -a kubelet"
I1025 10:12:35.474630  503346 config.go:182] Loaded profile config "custom-flannel-797295": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-797295 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xs2wd" [5fec27c4-e52e-4d67-8db5-f81fe2510bbc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xs2wd" [5fec27c4-e52e-4d67-8db5-f81fe2510bbc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004043354s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-797295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-797295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-tr4bk" [7b7dcb37-78c6-4e38-b66f-6db948a15113] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003938948s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/false/Start (72.24s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m12.23821631s)
--- PASS: TestNetworkPlugins/group/false/Start (72.24s)

TestNetworkPlugins/group/calico/KubeletFlags (0.47s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-797295 "pgrep -a kubelet"
I1025 10:13:05.706567  503346 config.go:182] Loaded profile config "calico-797295": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

TestNetworkPlugins/group/calico/NetCatPod (10.89s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-797295 replace --force -f testdata/netcat-deployment.yaml
I1025 10:13:06.563028  503346 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1025 10:13:06.572029  503346 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ggpnd" [72bd4419-3d03-425e-b652-1beb2689511f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ggpnd" [72bd4419-3d03-425e-b652-1beb2689511f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004218971s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.89s)

TestNetworkPlugins/group/enable-default-cni/Start (38.56s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E1025 10:13:07.897791  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (38.558865345s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.56s)

TestNetworkPlugins/group/calico/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-797295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/flannel/Start (44.79s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (44.785781614s)
--- PASS: TestNetworkPlugins/group/flannel/Start (44.79s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-797295 "pgrep -a kubelet"
I1025 10:13:46.710574  503346 config.go:182] Loaded profile config "enable-default-cni-797295": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-797295 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7j7r4" [4d1c00c5-9da3-4a7a-a022-6a93eb661c5f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7j7r4" [4d1c00c5-9da3-4a7a-a022-6a93eb661c5f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004587404s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-797295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/false/KubeletFlags (0.5s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-797295 "pgrep -a kubelet"
I1025 10:14:14.240225  503346 config.go:182] Loaded profile config "false-797295": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.50s)

TestNetworkPlugins/group/false/NetCatPod (9.28s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-797295 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-k5g7h" [d48dbbc9-bdb2-4e5a-8c73-9b28bd4bdbbe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-k5g7h" [d48dbbc9-bdb2-4e5a-8c73-9b28bd4bdbbe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.003777437s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.28s)

TestNetworkPlugins/group/bridge/Start (70.18s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m10.177276778s)
--- PASS: TestNetworkPlugins/group/bridge/Start (70.18s)

TestNetworkPlugins/group/false/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-797295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.15s)

TestNetworkPlugins/group/false/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.12s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-q8wpn" [d924c904-7cea-49cd-8507-8f22d2586154] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003920721s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-797295 "pgrep -a kubelet"
I1025 10:14:30.990708  503346 config.go:182] Loaded profile config "flannel-797295": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)

TestNetworkPlugins/group/flannel/NetCatPod (10.21s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-797295 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l265l" [ebfbc536-8876-448b-850d-4b8e19b5c654] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-l265l" [ebfbc536-8876-448b-850d-4b8e19b5c654] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004022941s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.21s)

TestNetworkPlugins/group/kubenet/Start (70.92s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-797295 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m10.91676636s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (70.92s)
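Unlike the groups above, kubenet is selected through the legacy --network-plugin flag rather than --cni, as in the invocation just run. Minimal sketch (profile name hypothetical):

out/minikube-linux-amd64 start -p kubenet-demo --memory=3072 \
  --network-plugin=kubenet --driver=docker --container-runtime=docker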

TestNetworkPlugins/group/flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-797295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestStartStop/group/old-k8s-version/serial/FirstStart (78.83s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-484669 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-484669 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m18.833210103s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (78.83s)
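The old-k8s-version group pins an older release via --kubernetes-version. Stripped to its essentials, the invocation above is roughly (profile name hypothetical):

out/minikube-linux-amd64 start -p old-k8s-demo --memory=3072 \
  --kubernetes-version=v1.28.0 --driver=docker --container-runtime=docker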

TestStartStop/group/no-preload/serial/FirstStart (79.38s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-696127 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-696127 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m19.384050561s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.38s)
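--preload=false skips the preloaded image tarball, so this start pulls each component image individually, which accounts for the longer FirstStart time. Minimal sketch (profile name hypothetical):

out/minikube-linux-amd64 start -p no-preload-demo --memory=3072 \
  --preload=false --kubernetes-version=v1.34.1 \
  --driver=docker --container-runtime=docker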

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-797295 "pgrep -a kubelet"
I1025 10:15:26.797969  503346 config.go:182] Loaded profile config "bridge-797295": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (9.2s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-797295 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2622k" [7e6699d4-7d2c-4df2-90e6-d087689473f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2622k" [7e6699d4-7d2c-4df2-90e6-d087689473f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004327244s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-797295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-797295 "pgrep -a kubelet"
I1025 10:15:44.333604  503346 config.go:182] Loaded profile config "kubenet-797295": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.35s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.21s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-797295 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vknp5" [3c265cbb-4ab5-4e0e-906c-bd2f43ed12f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vknp5" [3c265cbb-4ab5-4e0e-906c-bd2f43ed12f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004206765s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.21s)

TestNetworkPlugins/group/kubenet/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-797295 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.16s)

TestNetworkPlugins/group/kubenet/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.20s)

TestNetworkPlugins/group/kubenet/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-797295 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)
E1025 10:17:56.171134  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/custom-flannel-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/embed-certs/serial/FirstStart (71.36s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-246472 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-246472 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m11.361071198s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.36s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.31s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-484669 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [269d7137-2da7-40c9-9dd6-2c18fa052c39] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [269d7137-2da7-40c9-9dd6-2c18fa052c39] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004407187s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-484669 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.31s)
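DeployApp creates a busybox pod and then reads its open-file limit. By hand, with kubectl wait standing in for the suite's poller (a sketch):

kubectl --context old-k8s-version-484669 create -f testdata/busybox.yaml
kubectl --context old-k8s-version-484669 wait pod busybox --for=condition=Ready --timeout=480s
kubectl --context old-k8s-version-484669 exec busybox -- /bin/sh -c "ulimit -n"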

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-484669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-484669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.037281562s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-484669 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/old-k8s-version/serial/Stop (10.92s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-484669 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-484669 --alsologtostderr -v=3: (10.923255562s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.92s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.56s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-504715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-504715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (1m4.563787517s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (64.56s)

TestStartStop/group/no-preload/serial/DeployApp (9.27s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-696127 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d366e802-6b0e-4700-a486-c3ca5952c8c0] Pending
E1025 10:16:24.771200  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/auto-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:24.777534  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/auto-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:24.788969  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/auto-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:24.810420  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/auto-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:24.851876  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/auto-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:24.933449  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/auto-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:25.094748  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/auto-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [d366e802-6b0e-4700-a486-c3ca5952c8c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1025 10:16:26.058178  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/auto-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:27.340536  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/auto-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [d366e802-6b0e-4700-a486-c3ca5952c8c0] Running
E1025 10:16:28.262663  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:29.665236  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/functional-013051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:29.902716  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/auto-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003481949s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-696127 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-484669 -n old-k8s-version-484669
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-484669 -n old-k8s-version-484669: exit status 7 (100.023494ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-484669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
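minikube status exits 7 when the host is stopped, which the test explicitly tolerates ("may be ok"); addon toggles still work against the stopped profile:

out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-484669   # prints Stopped, exits 7
out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-484669 \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4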

TestStartStop/group/old-k8s-version/serial/SecondStart (45.73s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-484669 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E1025 10:16:25.416700  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/auto-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-484669 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (45.386199946s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-484669 -n old-k8s-version-484669
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.73s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-696127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-696127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.032617733s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-696127 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/no-preload/serial/Stop (11.21s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-696127 --alsologtostderr -v=3
E1025 10:16:35.024162  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/auto-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:16:45.265476  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/auto-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-696127 --alsologtostderr -v=3: (11.209810397s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.21s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-696127 -n no-preload-696127
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-696127 -n no-preload-696127: exit status 7 (100.98306ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-696127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/SecondStart (50.39s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-696127 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1025 10:16:55.968769  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/skaffold-268858/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:05.747482  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/auto-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-696127 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (50.006413844s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-696127 -n no-preload-696127
E1025 10:17:36.322751  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/custom-flannel-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.39s)

TestStartStop/group/embed-certs/serial/DeployApp (8.27s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-246472 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c47f4acd-8e10-436c-855a-c6240b1e8c8a] Pending
helpers_test.go:352: "busybox" [c47f4acd-8e10-436c-855a-c6240b1e8c8a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c47f4acd-8e10-436c-855a-c6240b1e8c8a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003408442s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-246472 exec busybox -- /bin/sh -c "ulimit -n"
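The sequence above (create from a manifest, wait for pods matching the integration-test=busybox label, then exec into the pod) is the pattern every DeployApp step in this group repeats. A rough stand-in follows, shelling out to kubectl directly rather than using the suite's helpers_test.go pollers; the context name, manifest path, and 8m timeout are taken from the log.

// deploy_and_verify.go - sketch of the create / wait / exec sequence.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("kubectl", args...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	// Context and manifest path mirror the log above (assumptions).
	run("--context", "embed-certs-246472", "create", "-f", "testdata/busybox.yaml")
	// 8m matches the wait the test announces before polling the label.
	run("--context", "embed-certs-246472", "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m0s", "-n", "default")
	// Final sanity check from the log: read the open-files limit in the pod.
	run("--context", "embed-certs-246472", "exec", "busybox", "--",
		"/bin/sh", "-c", "ulimit -n")
}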
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.27s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xhtnq" [984d742e-80b6-4637-9549-77398dc5920d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003846315s
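The UserAppExistsAfterStop and AddonExistsAfterStop steps all reduce to the same wait: poll until every pod matching a label selector is Running, within the announced deadline (9m0s here). Below is a self-contained client-go rendition of that poll, assuming a default kubeconfig location; the suite's own implementation lives in helpers_test.go and differs in detail.

// wait_for_label.go - sketch: poll for Running pods matching a label.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// 9m matches the deadline the test announces; 2s is an arbitrary interval.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("k8s-app=kubernetes-dashboard healthy")
}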
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-246472 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-246472 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.81s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xhtnq" [984d742e-80b6-4637-9549-77398dc5920d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003874042s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-484669 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/Stop (10.98s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-246472 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-246472 --alsologtostderr -v=3: (10.978427073s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.98s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-504715 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [23420c67-e2b6-4a40-beeb-b4021b85f711] Pending
helpers_test.go:352: "busybox" [23420c67-e2b6-4a40-beeb-b4021b85f711] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [23420c67-e2b6-4a40-beeb-b4021b85f711] Running
E1025 10:17:24.825745  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/kindnet-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004193968s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-504715 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-484669 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
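VerifyKubernetesImages works by listing the profile's images as JSON and flagging anything outside the expected registries, which is why the known busybox test image is called out as "non-minikube" without failing the step. A sketch of that audit follows; the repoTags field name assumes the shape of minikube's JSON image listing, and the prefix allowlist here is illustrative, not the test's real list.

// image_audit.go - sketch: flag image tags outside an expected allowlist.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// image assumes minikube's JSON image listing carries a "repoTags" array.
type image struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	// Binary path and profile name taken from the log above (assumptions).
	out, err := exec.Command("out/minikube-linux-amd64", "-p",
		"old-k8s-version-484669", "image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			// Illustrative allowlist; the real test compares against the
			// expected Kubernetes images for the cluster version.
			if !strings.HasPrefix(tag, "registry.k8s.io/") &&
				!strings.HasPrefix(tag, "gcr.io/k8s-minikube/storage-provisioner") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}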
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-484669 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-484669 -n old-k8s-version-484669
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-484669 -n old-k8s-version-484669: exit status 2 (334.840727ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-484669 -n old-k8s-version-484669
E1025 10:17:23.537184  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/kindnet-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:23.543650  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/kindnet-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:23.555074  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/kindnet-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:23.576536  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/kindnet-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:23.618086  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/kindnet-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:23.699652  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/kindnet-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-484669 -n old-k8s-version-484669: exit status 2 (332.49186ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-484669 --alsologtostderr -v=1
E1025 10:17:23.861667  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/kindnet-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:24.183546  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/kindnet-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-484669 -n old-k8s-version-484669
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-484669 -n old-k8s-version-484669
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.54s)

TestStartStop/group/newest-cni/serial/FirstStart (31.19s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-990912 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-990912 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (31.190969926s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.19s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-246472 -n embed-certs-246472
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-246472 -n embed-certs-246472: exit status 7 (97.095555ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-246472 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (48.03s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-246472 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1025 10:17:28.669348  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/kindnet-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-246472 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (47.675378588s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-246472 -n embed-certs-246472
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-504715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-504715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.731814204s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-504715 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.81s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-504715 --alsologtostderr -v=3
E1025 10:17:33.791504  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/kindnet-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:35.674806  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/custom-flannel-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:35.681798  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/custom-flannel-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:35.693206  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/custom-flannel-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:35.715200  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/custom-flannel-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:35.756688  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/custom-flannel-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:35.838325  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/custom-flannel-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:36.000643  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/custom-flannel-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-504715 --alsologtostderr -v=3: (11.345554254s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.35s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6b7qm" [ccb902a7-4b1e-49e8-9b59-7f84fb74ee56] Running
E1025 10:17:36.964132  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/custom-flannel-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:38.245547  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/custom-flannel-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:40.807374  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/custom-flannel-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007116499s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6b7qm" [ccb902a7-4b1e-49e8-9b59-7f84fb74ee56] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004387674s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-696127 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504715 -n default-k8s-diff-port-504715
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504715 -n default-k8s-diff-port-504715: exit status 7 (96.386501ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-504715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-504715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
E1025 10:17:44.033756  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/kindnet-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:45.928771  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/custom-flannel-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:46.709217  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/auto-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-504715 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (54.499883252s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-504715 -n default-k8s-diff-port-504715
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.83s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-696127 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/no-preload/serial/Pause (2.93s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-696127 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-696127 -n no-preload-696127
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-696127 -n no-preload-696127: exit status 2 (382.078999ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-696127 -n no-preload-696127
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-696127 -n no-preload-696127: exit status 2 (409.352633ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-696127 --alsologtostderr -v=1
E1025 10:17:50.976364  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-696127 -n no-preload-696127
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-696127 -n no-preload-696127
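Each Pause step runs the same round trip: pause, then confirm via two status queries that the apiserver reports Paused and the kubelet Stopped (both exiting 2, which the test accepts as "may be ok"), then unpause and confirm both queries exit cleanly. A compressed sketch of that flow follows, again with the binary path and profile taken from the log rather than from the suite's helpers.

// pause_roundtrip.go - sketch of the pause / verify / unpause / verify flow.
package main

import (
	"fmt"
	"os/exec"
)

const mk = "out/minikube-linux-amd64" // path taken from the log (assumption)

// status returns the formatted field and the command's exit code
// (0 if the command succeeded or failed without an exit code).
func status(format, profile string) (string, int) {
	out, err := exec.Command(mk, "status", "--format="+format, "-p", profile, "-n", profile).Output()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return string(out), code
}

func main() {
	profile := "no-preload-696127"
	exec.Command(mk, "pause", "-p", profile).Run()
	// Exit status 2 while paused is expected ("may be ok" in the log).
	if out, code := status("{{.APIServer}}", profile); code == 2 {
		fmt.Printf("paused: apiserver=%s (exit 2, may be ok)\n", out)
	}
	exec.Command(mk, "unpause", "-p", profile).Run()
	// After unpause both queries should exit 0 again.
	if out, code := status("{{.APIServer}}", profile); code == 0 {
		fmt.Println("unpaused: apiserver =", out)
	}
}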
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.93s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-990912 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1025 10:17:59.233644  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/calico-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:59.240171  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/calico-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:59.251777  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/calico-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:59.273288  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/calico-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:59.314819  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/calico-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:59.396462  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/calico-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:17:59.558401  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/calico-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/newest-cni/serial/Stop (11.02s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-990912 --alsologtostderr -v=3
E1025 10:17:59.880340  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/calico-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:00.522517  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/calico-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:01.804776  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/calico-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:04.366853  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/calico-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:04.515882  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/kindnet-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:07.897161  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/addons-456159/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:09.488929  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/calico-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-990912 --alsologtostderr -v=3: (11.015956633s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.02s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-990912 -n newest-cni-990912
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-990912 -n newest-cni-990912: exit status 7 (88.391952ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-990912 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (12.43s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-990912 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-990912 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.1: (12.051745321s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-990912 -n newest-cni-990912
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.43s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2qn7q" [bc87b989-266f-448e-8e51-8679e7285451] Running
E1025 10:18:16.653307  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/custom-flannel-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:19.730619  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/calico-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004834311s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2qn7q" [bc87b989-266f-448e-8e51-8679e7285451] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004091182s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-246472 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-990912 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.52s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-990912 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-990912 -n newest-cni-990912
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-990912 -n newest-cni-990912: exit status 2 (324.860633ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-990912 -n newest-cni-990912
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-990912 -n newest-cni-990912: exit status 2 (321.441985ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-990912 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-990912 -n newest-cni-990912
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-990912 -n newest-cni-990912
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.52s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-246472 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.44s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-246472 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-246472 -n embed-certs-246472
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-246472 -n embed-certs-246472: exit status 2 (322.235008ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-246472 -n embed-certs-246472
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-246472 -n embed-certs-246472: exit status 2 (331.361825ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-246472 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-246472 -n embed-certs-246472
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-246472 -n embed-certs-246472
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.44s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sm5v2" [ec7640da-d7ce-42fe-9938-cc71275f9d5b] Running
E1025 10:18:40.212728  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/calico-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003931823s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sm5v2" [ec7640da-d7ce-42fe-9938-cc71275f9d5b] Running
E1025 10:18:45.478200  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/kindnet-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:46.886040  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/enable-default-cni-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:46.892547  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/enable-default-cni-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:46.903994  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/enable-default-cni-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:46.925502  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/enable-default-cni-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:46.967039  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/enable-default-cni-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:47.048710  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/enable-default-cni-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:47.210475  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/enable-default-cni-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:47.532280  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/enable-default-cni-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:18:48.173544  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/enable-default-cni-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00400212s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-504715 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-504715 image list --format=json
E1025 10:18:49.456182  503346 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/enable-default-cni-797295/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-504715 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504715 -n default-k8s-diff-port-504715
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504715 -n default-k8s-diff-port-504715: exit status 2 (325.87648ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-504715 -n default-k8s-diff-port-504715
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-504715 -n default-k8s-diff-port-504715: exit status 2 (321.821098ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-504715 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-504715 -n default-k8s-diff-port-504715
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-504715 -n default-k8s-diff-port-504715
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.45s)

Test skip (22/345)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4.26s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-797295 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-797295

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-797295

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-797295

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-797295

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-797295

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-797295

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-797295

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-797295

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-797295

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-797295

>>> host: /etc/nsswitch.conf:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: /etc/hosts:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: /etc/resolv.conf:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-797295

>>> host: crictl pods:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: crictl containers:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> k8s: describe netcat deployment:
error: context "cilium-797295" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-797295" does not exist

>>> k8s: netcat logs:
error: context "cilium-797295" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-797295" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-797295" does not exist

>>> k8s: coredns logs:
error: context "cilium-797295" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-797295" does not exist

>>> k8s: api server logs:
error: context "cilium-797295" does not exist

>>> host: /etc/cni:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: ip a s:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: ip r s:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: iptables-save:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: iptables table nat:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-797295

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-797295

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-797295" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-797295" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-797295

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-797295

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-797295" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-797295" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-797295" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-797295" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-797295" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: kubelet daemon config:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> k8s: kubelet logs:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-499776/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:07:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-652303
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-499776/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:07:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-docker-637697
contexts:
- context:
    cluster: cert-expiration-652303
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:07:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-652303
  name: cert-expiration-652303
- context:
    cluster: offline-docker-637697
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:07:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-docker-637697
  name: offline-docker-637697
current-context: ""
kind: Config
users:
- name: cert-expiration-652303
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/cert-expiration-652303/client.crt
    client-key: /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/cert-expiration-652303/client.key
- name: offline-docker-637697
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/offline-docker-637697/client.crt
    client-key: /home/jenkins/minikube-integration/21767-499776/.minikube/profiles/offline-docker-637697/client.key
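Note on the kubeconfig dump above: it is the key diagnostic for this block. current-context is empty and neither the clusters nor the contexts list contains an entry for cilium-797295, so every kubectl probe in these debug logs fails before it can reach a cluster. A minimal sketch of confirming the same state locally, using standard kubectl/minikube commands (the profile name is taken from this log; exact error wording may vary by version):

kubectl config get-contexts     # cilium-797295 is not listed
kubectl config current-context  # fails because current-context is unset
minikube profile list           # shows which minikube profiles actually exist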

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-797295

>>> host: docker daemon status:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: docker daemon config:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: docker system info:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: cri-docker daemon status:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: cri-docker daemon config:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: cri-dockerd version:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: containerd daemon status:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: containerd daemon config:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: containerd config dump:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: crio daemon status:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: crio daemon config:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: /etc/crio:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

>>> host: crio config:
* Profile "cilium-797295" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-797295"

----------------------- debugLogs end: cilium-797295 [took: 4.060842699s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-797295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-797295
--- SKIP: TestNetworkPlugins/group/cilium (4.26s)
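The failures in the debugLogs block above are expected noise rather than a regression: net_test.go:102 skips the cilium group before any cluster is created, but the debug-log collector still runs its full probe set against the cilium-797295 profile, so the kubectl-based probes report a missing context and the host-based probes report a missing minikube profile. As a rough sketch, both error shapes can be reproduced against a deliberately nonexistent name (no-such-profile below is hypothetical; wording may vary by kubectl/minikube version):

kubectl --context no-such-profile get pods   # error: context "no-such-profile" does not exist
minikube ssh -p no-such-profile              # * Profile "no-such-profile" not found. ...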

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-135019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-135019
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
