Test Report: Docker_Linux_containerd_arm64 21647

f5f0858587e77e8c1559a01ec4b2a40a06b76dc9:2025-10-18:41961

Failed tests (12/331)

TestAddons/parallel/Ingress (492.69s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-897172 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-897172 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-897172 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [69e78953-0244-4b1b-b6b5-2de0b5385adf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-897172 -n addons-897172
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-10-18 12:14:05.226233445 +0000 UTC m=+804.483871554
addons_test.go:252: (dbg) Run:  kubectl --context addons-897172 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-897172 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-897172/192.168.49.2
Start Time:       Sat, 18 Oct 2025 12:06:04 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.34
IPs:
IP:  10.244.0.34
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2sf6x (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-2sf6x:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  8m1s                   default-scheduler  Successfully assigned default/nginx to addons-897172
Warning  Failed     6m34s (x3 over 7m44s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    5m1s (x5 over 8m)      kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     5m (x2 over 8m)        kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     5m (x5 over 8m)        kubelet            Error: ErrImagePull
Warning  Failed     2m48s (x20 over 8m)    kubelet            Error: ImagePullBackOff
Normal   BackOff    2m33s (x21 over 8m)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
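Note: the Warning events above show that every pull of "docker.io/nginx:alpine" was rejected by Docker Hub with 429 Too Many Requests, i.e. the host's unauthenticated pull quota was exhausted. As a diagnostic sketch (not part of the test run; it assumes curl and jq are available on the host and uses the ratelimitpreview/test repository that Docker documents for this check), the remaining anonymous quota can be read from the registry's response headers:

    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -s --head -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

The ratelimit-limit and ratelimit-remaining headers report the quota for the requesting IP.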
addons_test.go:252: (dbg) Run:  kubectl --context addons-897172 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-897172 logs nginx -n default: exit status 1 (99.586569ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-897172 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
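Note: this failure appears environmental rather than a regression; the pod never started only because the image pull was rate-limited. A possible mitigation sketch, assuming Docker Hub credentials are available to the CI host (the secret name "regcred" is illustrative; minikube's registry-creds addon, exercised elsewhere in this run, automates a similar setup):

    kubectl --context addons-897172 create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<access-token>
    kubectl --context addons-897172 patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Once the default service account carries the pull secret, pods in the default namespace (including the test's nginx pod) pull with the authenticated, higher rate limit.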
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-897172
helpers_test.go:243: (dbg) docker inspect addons-897172:

-- stdout --
	[
	    {
	        "Id": "e79e9ade5524e8c802c11ad371fb1dbb6226665469d4bd12a08f8dd1b85b98ca",
	        "Created": "2025-10-18T12:01:21.360855514Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2078122,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:01:21.422405524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/e79e9ade5524e8c802c11ad371fb1dbb6226665469d4bd12a08f8dd1b85b98ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e79e9ade5524e8c802c11ad371fb1dbb6226665469d4bd12a08f8dd1b85b98ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/e79e9ade5524e8c802c11ad371fb1dbb6226665469d4bd12a08f8dd1b85b98ca/hosts",
	        "LogPath": "/var/lib/docker/containers/e79e9ade5524e8c802c11ad371fb1dbb6226665469d4bd12a08f8dd1b85b98ca/e79e9ade5524e8c802c11ad371fb1dbb6226665469d4bd12a08f8dd1b85b98ca-json.log",
	        "Name": "/addons-897172",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-897172:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-897172",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e79e9ade5524e8c802c11ad371fb1dbb6226665469d4bd12a08f8dd1b85b98ca",
	                "LowerDir": "/var/lib/docker/overlay2/2163bfad29fc781452b39014b26b7af0012f5812282fd17409570d2d8604ffad-init/diff:/var/lib/docker/overlay2/647b2423f8222638985dff90791465004ec84c7fd61ca3830bba92bce09f80ef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2163bfad29fc781452b39014b26b7af0012f5812282fd17409570d2d8604ffad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2163bfad29fc781452b39014b26b7af0012f5812282fd17409570d2d8604ffad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2163bfad29fc781452b39014b26b7af0012f5812282fd17409570d2d8604ffad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-897172",
	                "Source": "/var/lib/docker/volumes/addons-897172/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-897172",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-897172",
	                "name.minikube.sigs.k8s.io": "addons-897172",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7258c9852136886c5b8615dcf21b68c25fa67387a4a5f96112e0385d16ef7171",
	            "SandboxKey": "/var/run/docker/netns/7258c9852136",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35694"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35695"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35698"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35696"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35697"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-897172": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:79:be:e3:5f:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1cd7773662be59cbfd50d24e7cd88733181b943a056c516a5ec6159cddc5c286",
	                    "EndpointID": "12138909070b7779605f90c0de940f5d02d7193a7f88f3958df6260c3dd6b0b4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-897172",
	                        "e79e9ade5524"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-897172 -n addons-897172
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-897172 logs -n 25: (1.288608873s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                      ARGS                                                                                                                                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-038567                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-038567   │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │ 18 Oct 25 12:00 UTC │
	│ delete  │ -p download-only-110073                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-110073   │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │ 18 Oct 25 12:00 UTC │
	│ start   │ --download-only -p download-docker-697075 --alsologtostderr --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-697075 │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │                     │
	│ delete  │ -p download-docker-697075                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-docker-697075 │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │ 18 Oct 25 12:00 UTC │
	│ start   │ --download-only -p binary-mirror-441377 --alsologtostderr --binary-mirror http://127.0.0.1:43257 --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                               │ binary-mirror-441377   │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │                     │
	│ delete  │ -p binary-mirror-441377                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ binary-mirror-441377   │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │ 18 Oct 25 12:00 UTC │
	│ addons  │ enable dashboard -p addons-897172                                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │                     │
	│ addons  │ disable dashboard -p addons-897172                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │                     │
	│ start   │ -p addons-897172 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │ 18 Oct 25 12:03 UTC │
	│ addons  │ addons-897172 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:04 UTC │ 18 Oct 25 12:04 UTC │
	│ addons  │ addons-897172 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:04 UTC │ 18 Oct 25 12:04 UTC │
	│ addons  │ enable headlamp -p addons-897172 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:04 UTC │ 18 Oct 25 12:04 UTC │
	│ addons  │ addons-897172 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:04 UTC │ 18 Oct 25 12:05 UTC │
	│ ip      │ addons-897172 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons  │ addons-897172 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons  │ addons-897172 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons  │ addons-897172 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons  │ addons-897172 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons  │ addons-897172 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons  │ addons-897172 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons  │ addons-897172 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:06 UTC │
	│ addons  │ addons-897172 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:08 UTC │ 18 Oct 25 12:09 UTC │
	│ addons  │ addons-897172 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:09 UTC │ 18 Oct 25 12:09 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-897172                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:09 UTC │ 18 Oct 25 12:09 UTC │
	│ addons  │ addons-897172 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:09 UTC │ 18 Oct 25 12:09 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:00:54
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:00:54.367432 2077724 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:00:54.368114 2077724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:00:54.368130 2077724 out.go:374] Setting ErrFile to fd 2...
	I1018 12:00:54.368136 2077724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:00:54.368684 2077724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
	I1018 12:00:54.369208 2077724 out.go:368] Setting JSON to false
	I1018 12:00:54.370052 2077724 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":49402,"bootTime":1760739453,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 12:00:54.370161 2077724 start.go:141] virtualization:  
	I1018 12:00:54.373464 2077724 out.go:179] * [addons-897172] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:00:54.377392 2077724 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:00:54.377465 2077724 notify.go:220] Checking for updates...
	I1018 12:00:54.383184 2077724 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:00:54.385984 2077724 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	I1018 12:00:54.388882 2077724 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	I1018 12:00:54.391737 2077724 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:00:54.394572 2077724 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:00:54.397755 2077724 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:00:54.427244 2077724 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:00:54.427379 2077724 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:00:54.485624 2077724 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 12:00:54.476724944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:00:54.485733 2077724 docker.go:318] overlay module found
	I1018 12:00:54.488785 2077724 out.go:179] * Using the docker driver based on user configuration
	I1018 12:00:54.491498 2077724 start.go:305] selected driver: docker
	I1018 12:00:54.491514 2077724 start.go:925] validating driver "docker" against <nil>
	I1018 12:00:54.491528 2077724 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:00:54.492226 2077724 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:00:54.548122 2077724 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 12:00:54.539431163 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:00:54.548285 2077724 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 12:00:54.548516 2077724 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:00:54.551343 2077724 out.go:179] * Using Docker driver with root privileges
	I1018 12:00:54.555054 2077724 cni.go:84] Creating CNI manager for ""
	I1018 12:00:54.555123 2077724 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1018 12:00:54.555136 2077724 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:00:54.555213 2077724 start.go:349] cluster config:
	{Name:addons-897172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-897172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:00:54.558350 2077724 out.go:179] * Starting "addons-897172" primary control-plane node in "addons-897172" cluster
	I1018 12:00:54.561132 2077724 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1018 12:00:54.564032 2077724 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:00:54.566883 2077724 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1018 12:00:54.566908 2077724 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:00:54.566928 2077724 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1018 12:00:54.566938 2077724 cache.go:58] Caching tarball of preloaded images
	I1018 12:00:54.567023 2077724 preload.go:233] Found /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1018 12:00:54.567033 2077724 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1018 12:00:54.567357 2077724 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/config.json ...
	I1018 12:00:54.567389 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/config.json: {Name:mkafcabb28ec6f80973f821bd3a3501eb808e73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:54.583536 2077724 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 12:00:54.583654 2077724 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 12:00:54.583679 2077724 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 12:00:54.583687 2077724 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 12:00:54.583695 2077724 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 12:00:54.583707 2077724 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 12:01:12.760461 2077724 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 12:01:12.760502 2077724 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:01:12.760547 2077724 start.go:360] acquireMachinesLock for addons-897172: {Name:mk3faea9d4c04d1ecb221033ca1da8db432fda2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:01:12.760674 2077724 start.go:364] duration metric: took 103.645µs to acquireMachinesLock for "addons-897172"
	I1018 12:01:12.760703 2077724 start.go:93] Provisioning new machine with config: &{Name:addons-897172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-897172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1018 12:01:12.760780 2077724 start.go:125] createHost starting for "" (driver="docker")
	I1018 12:01:12.764181 2077724 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 12:01:12.764424 2077724 start.go:159] libmachine.API.Create for "addons-897172" (driver="docker")
	I1018 12:01:12.764458 2077724 client.go:168] LocalClient.Create starting
	I1018 12:01:12.764581 2077724 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem
	I1018 12:01:12.843489 2077724 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/cert.pem
	I1018 12:01:14.748777 2077724 cli_runner.go:164] Run: docker network inspect addons-897172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:01:14.765481 2077724 cli_runner.go:211] docker network inspect addons-897172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:01:14.765564 2077724 network_create.go:284] running [docker network inspect addons-897172] to gather additional debugging logs...
	I1018 12:01:14.765587 2077724 cli_runner.go:164] Run: docker network inspect addons-897172
	W1018 12:01:14.780226 2077724 cli_runner.go:211] docker network inspect addons-897172 returned with exit code 1
	I1018 12:01:14.780265 2077724 network_create.go:287] error running [docker network inspect addons-897172]: docker network inspect addons-897172: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-897172 not found
	I1018 12:01:14.780278 2077724 network_create.go:289] output of [docker network inspect addons-897172]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-897172 not found
	
	** /stderr **
	I1018 12:01:14.780373 2077724 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:01:14.796588 2077724 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197bce0}
	I1018 12:01:14.796638 2077724 network_create.go:124] attempt to create docker network addons-897172 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 12:01:14.796700 2077724 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-897172 addons-897172
	I1018 12:01:14.851490 2077724 network_create.go:108] docker network addons-897172 192.168.49.0/24 created
	I1018 12:01:14.851536 2077724 kic.go:121] calculated static IP "192.168.49.2" for the "addons-897172" container
	I1018 12:01:14.851611 2077724 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:01:14.866418 2077724 cli_runner.go:164] Run: docker volume create addons-897172 --label name.minikube.sigs.k8s.io=addons-897172 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:01:14.883604 2077724 oci.go:103] Successfully created a docker volume addons-897172
	I1018 12:01:14.883714 2077724 cli_runner.go:164] Run: docker run --rm --name addons-897172-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-897172 --entrypoint /usr/bin/test -v addons-897172:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:01:17.017557 2077724 cli_runner.go:217] Completed: docker run --rm --name addons-897172-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-897172 --entrypoint /usr/bin/test -v addons-897172:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.133802704s)
	I1018 12:01:17.017613 2077724 oci.go:107] Successfully prepared a docker volume addons-897172
	I1018 12:01:17.017643 2077724 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1018 12:01:17.017666 2077724 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:01:17.017730 2077724 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-897172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 12:01:21.286655 2077724 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-897172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.268863654s)
	I1018 12:01:21.286688 2077724 kic.go:203] duration metric: took 4.269019499s to extract preloaded images to volume ...
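Note: the preload step above uses a throwaway container as an untar helper: the lz4 tarball is bind-mounted read-only, the named volume is mounted at /extractDir, and tar unpacks into the volume. One way to peek at the result from the host (a sketch; it assumes the tarball lays out lib/containerd under the volume root, which matches the later -v addons-897172:/var mount, and any small image such as alpine works as the helper):

	# List the preloaded containerd image store inside the named volume.
	docker run --rm -v addons-897172:/var alpine ls /var/lib/containerd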
	W1018 12:01:21.286845 2077724 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 12:01:21.286955 2077724 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:01:21.345643 2077724 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-897172 --name addons-897172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-897172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-897172 --network addons-897172 --ip 192.168.49.2 --volume addons-897172:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:01:21.649683 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Running}}
	I1018 12:01:21.668062 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:21.688316 2077724 cli_runner.go:164] Run: docker exec addons-897172 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:01:21.740941 2077724 oci.go:144] the created container "addons-897172" has a running status.
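Note: the node container publishes ports 22, 2376, 5000, 8443 and 32443 on loopback with randomly assigned host ports. The SSH port used throughout the rest of this log (35694) can be recovered the same way minikube does via container inspect, or more simply with (illustrative):

	# Look up the host port mapped to the node's SSH port.
	docker port addons-897172 22/tcp
	# -> 127.0.0.1:35694 in this run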
	I1018 12:01:21.740969 2077724 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa...
	I1018 12:01:22.849770 2077724 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:01:22.868302 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:22.884155 2077724 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:01:22.884176 2077724 kic_runner.go:114] Args: [docker exec --privileged addons-897172 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 12:01:22.921385 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:22.939317 2077724 machine.go:93] provisionDockerMachine start ...
	I1018 12:01:22.939422 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:22.955189 2077724 main.go:141] libmachine: Using SSH client type: native
	I1018 12:01:22.955511 2077724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 35694 <nil> <nil>}
	I1018 12:01:22.955526 2077724 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:01:22.956181 2077724 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38028->127.0.0.1:35694: read: connection reset by peer
	I1018 12:01:26.111381 2077724 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-897172
	
	I1018 12:01:26.111408 2077724 ubuntu.go:182] provisioning hostname "addons-897172"
	I1018 12:01:26.111470 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:26.128279 2077724 main.go:141] libmachine: Using SSH client type: native
	I1018 12:01:26.128584 2077724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 35694 <nil> <nil>}
	I1018 12:01:26.128600 2077724 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-897172 && echo "addons-897172" | sudo tee /etc/hostname
	I1018 12:01:26.288230 2077724 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-897172
	
	I1018 12:01:26.288352 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:26.305202 2077724 main.go:141] libmachine: Using SSH client type: native
	I1018 12:01:26.305506 2077724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 35694 <nil> <nil>}
	I1018 12:01:26.305527 2077724 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-897172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-897172/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-897172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:01:26.451923 2077724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:01:26.451951 2077724 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-2075029/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-2075029/.minikube}
	I1018 12:01:26.451977 2077724 ubuntu.go:190] setting up certificates
	I1018 12:01:26.451991 2077724 provision.go:84] configureAuth start
	I1018 12:01:26.452052 2077724 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-897172
	I1018 12:01:26.469410 2077724 provision.go:143] copyHostCerts
	I1018 12:01:26.469496 2077724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.pem (1078 bytes)
	I1018 12:01:26.469630 2077724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-2075029/.minikube/cert.pem (1123 bytes)
	I1018 12:01:26.469689 2077724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-2075029/.minikube/key.pem (1675 bytes)
	I1018 12:01:26.469740 2077724 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca-key.pem org=jenkins.addons-897172 san=[127.0.0.1 192.168.49.2 addons-897172 localhost minikube]
	I1018 12:01:26.659179 2077724 provision.go:177] copyRemoteCerts
	I1018 12:01:26.659290 2077724 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:01:26.659362 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:26.676340 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:26.779062 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 12:01:26.795497 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:01:26.812128 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:01:26.828758 2077724 provision.go:87] duration metric: took 376.741174ms to configureAuth
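Note: configureAuth generated a server certificate whose SANs (127.0.0.1, 192.168.49.2, addons-897172, localhost, minikube) were listed a few lines up. A quick host-side check of what actually got written (a sketch, assuming openssl is installed on the host):

	# Print the SANs of the machine server cert; path taken from the log above.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'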
	I1018 12:01:26.828825 2077724 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:01:26.829047 2077724 config.go:182] Loaded profile config "addons-897172": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:01:26.829062 2077724 machine.go:96] duration metric: took 3.889721187s to provisionDockerMachine
	I1018 12:01:26.829069 2077724 client.go:171] duration metric: took 14.064604819s to LocalClient.Create
	I1018 12:01:26.829101 2077724 start.go:167] duration metric: took 14.064678836s to libmachine.API.Create "addons-897172"
	I1018 12:01:26.829116 2077724 start.go:293] postStartSetup for "addons-897172" (driver="docker")
	I1018 12:01:26.829125 2077724 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:01:26.829191 2077724 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:01:26.829242 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:26.845537 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:26.947715 2077724 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:01:26.951011 2077724 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:01:26.951039 2077724 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:01:26.951049 2077724 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-2075029/.minikube/addons for local assets ...
	I1018 12:01:26.951114 2077724 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-2075029/.minikube/files for local assets ...
	I1018 12:01:26.951136 2077724 start.go:296] duration metric: took 122.014181ms for postStartSetup
	I1018 12:01:26.951446 2077724 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-897172
	I1018 12:01:26.968927 2077724 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/config.json ...
	I1018 12:01:26.969212 2077724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:01:26.969262 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:26.985225 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:27.085559 2077724 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:01:27.090769 2077724 start.go:128] duration metric: took 14.329972845s to createHost
	I1018 12:01:27.090794 2077724 start.go:83] releasing machines lock for "addons-897172", held for 14.330107021s
	I1018 12:01:27.090866 2077724 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-897172
	I1018 12:01:27.109332 2077724 ssh_runner.go:195] Run: cat /version.json
	I1018 12:01:27.109381 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:27.109411 2077724 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:01:27.109469 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:27.129197 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:27.146686 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:27.231561 2077724 ssh_runner.go:195] Run: systemctl --version
	I1018 12:01:27.322256 2077724 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:01:27.326495 2077724 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:01:27.326570 2077724 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:01:27.354793 2077724 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 12:01:27.354861 2077724 start.go:495] detecting cgroup driver to use...
	I1018 12:01:27.354906 2077724 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:01:27.354985 2077724 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1018 12:01:27.371087 2077724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1018 12:01:27.383685 2077724 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:01:27.383767 2077724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:01:27.401047 2077724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:01:27.419011 2077724 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:01:27.529680 2077724 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:01:27.652118 2077724 docker.go:234] disabling docker service ...
	I1018 12:01:27.652240 2077724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:01:27.673599 2077724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:01:27.686866 2077724 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:01:27.807668 2077724 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:01:27.920948 2077724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:01:27.934142 2077724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:01:27.948534 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1018 12:01:27.957423 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1018 12:01:27.966616 2077724 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1018 12:01:27.966731 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1018 12:01:27.975832 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:01:27.985097 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1018 12:01:27.993939 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:01:28.005982 2077724 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:01:28.015440 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1018 12:01:28.025181 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1018 12:01:28.034460 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1018 12:01:28.043710 2077724 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:01:28.051750 2077724 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:01:28.059497 2077724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:01:28.173162 2077724 ssh_runner.go:195] Run: sudo systemctl restart containerd
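Note: the sed commands above rewrite /etc/containerd/config.toml in place (cgroupfs instead of systemd cgroups, pause:3.10.1 as the sandbox image, the runc v2 runtime, unprivileged ports enabled) before containerd is restarted. One way to spot-check the result from the host (a sketch, using the node container name from this log):

	# Confirm the two settings most likely to cause kubelet/containerd mismatches.
	docker exec addons-897172 grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml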
	I1018 12:01:28.316764 2077724 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1018 12:01:28.316853 2077724 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1018 12:01:28.320825 2077724 start.go:563] Will wait 60s for crictl version
	I1018 12:01:28.320889 2077724 ssh_runner.go:195] Run: which crictl
	I1018 12:01:28.324594 2077724 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:01:28.349583 2077724 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1018 12:01:28.349663 2077724 ssh_runner.go:195] Run: containerd --version
	I1018 12:01:28.376134 2077724 ssh_runner.go:195] Run: containerd --version
	I1018 12:01:28.403599 2077724 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1018 12:01:28.406585 2077724 cli_runner.go:164] Run: docker network inspect addons-897172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:01:28.422110 2077724 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:01:28.425820 2077724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:01:28.435586 2077724 kubeadm.go:883] updating cluster {Name:addons-897172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-897172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:01:28.435720 2077724 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1018 12:01:28.435788 2077724 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:01:28.464197 2077724 containerd.go:627] all images are preloaded for containerd runtime.
	I1018 12:01:28.464219 2077724 containerd.go:534] Images already preloaded, skipping extraction
	I1018 12:01:28.464279 2077724 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:01:28.489338 2077724 containerd.go:627] all images are preloaded for containerd runtime.
	I1018 12:01:28.489364 2077724 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:01:28.489372 2077724 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 containerd true true} ...
	I1018 12:01:28.489460 2077724 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-897172 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-897172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:01:28.489530 2077724 ssh_runner.go:195] Run: sudo crictl info
	I1018 12:01:28.519199 2077724 cni.go:84] Creating CNI manager for ""
	I1018 12:01:28.519225 2077724 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1018 12:01:28.519243 2077724 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:01:28.519266 2077724 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-897172 NodeName:addons-897172 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:01:28.519380 2077724 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-897172"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
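	
	Note: the rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) and later fed to kubeadm init. If it needs to be sanity-checked by hand, recent kubeadm releases can validate it directly (illustrative; minikube itself skips this and relies on init's own parsing):
	
	# Validate the generated config without starting anything.
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml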
	
	I1018 12:01:28.519447 2077724 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:01:28.527048 2077724 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:01:28.527172 2077724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:01:28.534949 2077724 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1018 12:01:28.547675 2077724 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:01:28.561182 2077724 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1018 12:01:28.574426 2077724 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:01:28.577899 2077724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:01:28.587519 2077724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:01:28.693533 2077724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:01:28.708498 2077724 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172 for IP: 192.168.49.2
	I1018 12:01:28.708516 2077724 certs.go:195] generating shared ca certs ...
	I1018 12:01:28.708533 2077724 certs.go:227] acquiring lock for ca certs: {Name:mkb3a5ce8c0a7d3b9a246d80f0747d48f33f9661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:28.708659 2077724 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.key
	I1018 12:01:29.318591 2077724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt ...
	I1018 12:01:29.318624 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt: {Name:mk234a1f1a44ab06efce70f0dc418f81fd52f0e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:29.318850 2077724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.key ...
	I1018 12:01:29.318868 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.key: {Name:mka82413e87eae9641ba66292e212613c5c4f977 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:29.318963 2077724 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.key
	I1018 12:01:29.775477 2077724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.crt ...
	I1018 12:01:29.775511 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.crt: {Name:mke83871c09efb42f9667eaae56a4dff5477cb8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:29.775707 2077724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.key ...
	I1018 12:01:29.775721 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.key: {Name:mka883ece760b66f8a5b38807848cff872768cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:29.775809 2077724 certs.go:257] generating profile certs ...
	I1018 12:01:29.775886 2077724 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.key
	I1018 12:01:29.775904 2077724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt with IP's: []
	I1018 12:01:29.978082 2077724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt ...
	I1018 12:01:29.978113 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: {Name:mkd275e345beeb52a9d8089878d464409925c9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:29.978299 2077724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.key ...
	I1018 12:01:29.978312 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.key: {Name:mkb81e10ae3bcee03ab6c71bd6bd6256321bd770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:29.978398 2077724 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.key.eaa5417a
	I1018 12:01:29.978417 2077724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.crt.eaa5417a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 12:01:30.446799 2077724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.crt.eaa5417a ...
	I1018 12:01:30.446832 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.crt.eaa5417a: {Name:mk80d4e860a9a1031216fdb6a6e05fc29213cafd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:30.447021 2077724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.key.eaa5417a ...
	I1018 12:01:30.447037 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.key.eaa5417a: {Name:mk6b8d83060337b7c23a69c8113d845615e2f56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:30.447125 2077724 certs.go:382] copying /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.crt.eaa5417a -> /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.crt
	I1018 12:01:30.447204 2077724 certs.go:386] copying /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.key.eaa5417a -> /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.key
	I1018 12:01:30.447263 2077724 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.key
	I1018 12:01:30.447284 2077724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.crt with IP's: []
	I1018 12:01:30.649588 2077724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.crt ...
	I1018 12:01:30.649619 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.crt: {Name:mk976758f26e7b84df2186a05da11e24b6ac783a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:30.649795 2077724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.key ...
	I1018 12:01:30.649808 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.key: {Name:mk8ea210538a5a3b9d868226e9e956afca9f4cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:30.649993 2077724 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 12:01:30.650033 2077724 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem (1078 bytes)
	I1018 12:01:30.650065 2077724 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:01:30.650094 2077724 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/key.pem (1675 bytes)
	I1018 12:01:30.650741 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:01:30.669339 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:01:30.687955 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:01:30.705445 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:01:30.722717 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 12:01:30.739383 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:01:30.756114 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:01:30.772767 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:01:30.789076 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:01:30.805840 2077724 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:01:30.817999 2077724 ssh_runner.go:195] Run: openssl version
	I1018 12:01:30.824201 2077724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:01:30.832590 2077724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:01:30.836438 2077724 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:01 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:01:30.836509 2077724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:01:30.877575 2077724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
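Note: the two commands above implement OpenSSL's hashed-directory lookup convention: /etc/ssl/certs/<subject-hash>.0 must point at the CA for certificate verification to find it. The hash in the symlink name is reproducible on the host:

	# The symlink name is the cert's subject hash plus ".0".
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# -> b5213941, matching /etc/ssl/certs/b5213941.0 created above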
	I1018 12:01:30.886197 2077724 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:01:30.889693 2077724 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:01:30.889788 2077724 kubeadm.go:400] StartCluster: {Name:addons-897172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-897172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:01:30.889898 2077724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1018 12:01:30.889999 2077724 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:01:30.921802 2077724 cri.go:89] found id: ""
	I1018 12:01:30.921871 2077724 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:01:30.932255 2077724 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:01:30.940899 2077724 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 12:01:30.940964 2077724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 12:01:30.950860 2077724 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:01:30.950930 2077724 kubeadm.go:157] found existing configuration files:
	
	I1018 12:01:30.951020 2077724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:01:30.959061 2077724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:01:30.959125 2077724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:01:30.966230 2077724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:01:30.973779 2077724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:01:30.973875 2077724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:01:30.980783 2077724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:01:30.988527 2077724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:01:30.988622 2077724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:01:30.995575 2077724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:01:31.004254 2077724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:01:31.004383 2077724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
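Note: each of the four kubeconfig files gets the same stale-config treatment above: grep for the expected control-plane endpoint, and remove the file if the check fails (on this first start they simply don't exist yet) so kubeadm regenerates them. The four blocks compress to one loop (illustrative):

	# Equivalent of the four grep-then-rm blocks above.
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done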
	I1018 12:01:31.012448 2077724 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 12:01:31.054125 2077724 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:01:31.054425 2077724 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:01:31.077631 2077724 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 12:01:31.077756 2077724 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 12:01:31.077817 2077724 kubeadm.go:318] OS: Linux
	I1018 12:01:31.077891 2077724 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 12:01:31.077980 2077724 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 12:01:31.078068 2077724 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 12:01:31.078156 2077724 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 12:01:31.078235 2077724 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 12:01:31.078314 2077724 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 12:01:31.078421 2077724 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 12:01:31.078536 2077724 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 12:01:31.078619 2077724 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 12:01:31.156469 2077724 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:01:31.156645 2077724 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:01:31.156781 2077724 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:01:31.163353 2077724 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:01:31.169708 2077724 out.go:252]   - Generating certificates and keys ...
	I1018 12:01:31.169871 2077724 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:01:31.169978 2077724 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:01:31.525626 2077724 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:01:31.848832 2077724 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:01:32.393819 2077724 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:01:32.923032 2077724 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:01:33.652073 2077724 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:01:33.652225 2077724 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-897172 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:01:34.541290 2077724 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:01:34.541435 2077724 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-897172 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:01:34.931781 2077724 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:01:36.505574 2077724 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:01:37.367371 2077724 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:01:37.367532 2077724 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:01:38.122331 2077724 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:01:39.023171 2077724 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:01:39.920347 2077724 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:01:40.197284 2077724 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:01:40.949570 2077724 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:01:40.950113 2077724 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:01:40.954682 2077724 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:01:40.958145 2077724 out.go:252]   - Booting up control plane ...
	I1018 12:01:40.958256 2077724 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:01:40.958338 2077724 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:01:40.958415 2077724 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:01:40.974831 2077724 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:01:40.974951 2077724 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:01:40.982862 2077724 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:01:40.983156 2077724 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:01:40.983209 2077724 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:01:41.123684 2077724 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:01:41.123825 2077724 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:01:42.625547 2077724 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.50193372s
	I1018 12:01:42.630971 2077724 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:01:42.631391 2077724 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 12:01:42.632452 2077724 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:01:42.632560 2077724 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:01:46.720675 2077724 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.087650261s
	I1018 12:01:47.176870 2077724 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.544343746s
	I1018 12:01:49.134763 2077724 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501996642s
	I1018 12:01:49.158015 2077724 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:01:49.183587 2077724 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:01:49.199370 2077724 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:01:49.199605 2077724 kubeadm.go:318] [mark-control-plane] Marking the node addons-897172 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:01:49.212601 2077724 kubeadm.go:318] [bootstrap-token] Using token: p4hob0.9e6vf29erhsuavf2
	I1018 12:01:49.215600 2077724 out.go:252]   - Configuring RBAC rules ...
	I1018 12:01:49.215772 2077724 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:01:49.220164 2077724 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:01:49.231260 2077724 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:01:49.237790 2077724 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:01:49.242142 2077724 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:01:49.250360 2077724 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:01:49.541003 2077724 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:01:49.973935 2077724 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:01:50.542978 2077724 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:01:50.544541 2077724 kubeadm.go:318] 
	I1018 12:01:50.544618 2077724 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:01:50.544625 2077724 kubeadm.go:318] 
	I1018 12:01:50.544706 2077724 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:01:50.544712 2077724 kubeadm.go:318] 
	I1018 12:01:50.544738 2077724 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:01:50.544799 2077724 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:01:50.544852 2077724 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:01:50.544856 2077724 kubeadm.go:318] 
	I1018 12:01:50.544913 2077724 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:01:50.544917 2077724 kubeadm.go:318] 
	I1018 12:01:50.544967 2077724 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:01:50.544971 2077724 kubeadm.go:318] 
	I1018 12:01:50.545026 2077724 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:01:50.545104 2077724 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:01:50.545176 2077724 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:01:50.545202 2077724 kubeadm.go:318] 
	I1018 12:01:50.545291 2077724 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:01:50.545371 2077724 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:01:50.545375 2077724 kubeadm.go:318] 
	I1018 12:01:50.545463 2077724 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token p4hob0.9e6vf29erhsuavf2 \
	I1018 12:01:50.545571 2077724 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6ad86b1276159d70ddf959ffd2834e19bb4d7329ebde5370ec0afcbde1bef9f4 \
	I1018 12:01:50.545592 2077724 kubeadm.go:318] 	--control-plane 
	I1018 12:01:50.545596 2077724 kubeadm.go:318] 
	I1018 12:01:50.545685 2077724 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:01:50.545689 2077724 kubeadm.go:318] 
	I1018 12:01:50.546055 2077724 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token p4hob0.9e6vf29erhsuavf2 \
	I1018 12:01:50.546175 2077724 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6ad86b1276159d70ddf959ffd2834e19bb4d7329ebde5370ec0afcbde1bef9f4 
	I1018 12:01:50.550384 2077724 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 12:01:50.550639 2077724 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 12:01:50.550748 2077724 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 12:01:50.550769 2077724 cni.go:84] Creating CNI manager for ""
	I1018 12:01:50.550777 2077724 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1018 12:01:50.553944 2077724 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 12:01:50.556856 2077724 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 12:01:50.560949 2077724 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 12:01:50.560970 2077724 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 12:01:50.574166 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 12:01:50.877469 2077724 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:01:50.877597 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:50.877658 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-897172 minikube.k8s.io/updated_at=2025_10_18T12_01_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=addons-897172 minikube.k8s.io/primary=true
	I1018 12:01:51.034941 2077724 ops.go:34] apiserver oom_adj: -16
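
The -16 read back from /proc/<pid>/oom_adj is the legacy view of the kubelet shielding critical static pods from the OOM killer: such pods get oom_score_adj -997 on the modern interface, and the kernel's legacy mapping (-997 * 17 / 1000, truncated) reports that as -16. The same check against the modern file:

    cat /proc/$(pgrep kube-apiserver)/oom_score_adj    # expect -997 for a critical static pod
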
	I1018 12:01:51.035045 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:51.535966 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:52.036086 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:52.535702 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:53.035765 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:53.535332 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:54.035261 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:54.535816 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:55.036057 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:55.140647 2077724 kubeadm.go:1113] duration metric: took 4.263094222s to wait for elevateKubeSystemPrivileges
	I1018 12:01:55.140675 2077724 kubeadm.go:402] duration metric: took 24.250891332s to StartCluster
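
The burst of "kubectl get sa default" calls above is a 500ms poll: minikube waits for the service-account controller to mint the default ServiceAccount before declaring elevateKubeSystemPrivileges done, since the minikube-rbac clusterrolebinding created at 12:01:50.877597 is only useful once service accounts exist. As a shell sketch, the loop amounts to:

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
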
	I1018 12:01:55.140692 2077724 settings.go:142] acquiring lock: {Name:mkfe09c4f932c229739f9b782a8232962c7d94cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:55.140808 2077724 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-2075029/kubeconfig
	I1018 12:01:55.141214 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/kubeconfig: {Name:mkb34a50149724994ca0c2a0fd8679c156671366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:55.141423 2077724 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1018 12:01:55.141580 2077724 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:01:55.141828 2077724 config.go:182] Loaded profile config "addons-897172": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:01:55.141862 2077724 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
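
The toEnable map above is the resolved form of the profile's addon selection; the same toggles are exposed on the minikube CLI, for example:

    minikube -p addons-897172 addons list
    minikube -p addons-897172 addons enable ingress
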
	I1018 12:01:55.141992 2077724 addons.go:69] Setting yakd=true in profile "addons-897172"
	I1018 12:01:55.142018 2077724 addons.go:238] Setting addon yakd=true in "addons-897172"
	I1018 12:01:55.142049 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.142573 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.142832 2077724 addons.go:69] Setting inspektor-gadget=true in profile "addons-897172"
	I1018 12:01:55.142869 2077724 addons.go:238] Setting addon inspektor-gadget=true in "addons-897172"
	I1018 12:01:55.142897 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.143013 2077724 addons.go:69] Setting metrics-server=true in profile "addons-897172"
	I1018 12:01:55.143038 2077724 addons.go:238] Setting addon metrics-server=true in "addons-897172"
	I1018 12:01:55.143063 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.143304 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.143502 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.143890 2077724 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-897172"
	I1018 12:01:55.143911 2077724 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-897172"
	I1018 12:01:55.143941 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.144409 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.148350 2077724 addons.go:69] Setting registry=true in profile "addons-897172"
	I1018 12:01:55.148444 2077724 addons.go:238] Setting addon registry=true in "addons-897172"
	I1018 12:01:55.148496 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.149036 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.156763 2077724 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-897172"
	I1018 12:01:55.156798 2077724 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-897172"
	I1018 12:01:55.156839 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.157304 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.167922 2077724 addons.go:69] Setting registry-creds=true in profile "addons-897172"
	I1018 12:01:55.168012 2077724 addons.go:238] Setting addon registry-creds=true in "addons-897172"
	I1018 12:01:55.168080 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.168622 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.176046 2077724 addons.go:69] Setting cloud-spanner=true in profile "addons-897172"
	I1018 12:01:55.176085 2077724 addons.go:238] Setting addon cloud-spanner=true in "addons-897172"
	I1018 12:01:55.176131 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.176619 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.179938 2077724 addons.go:69] Setting storage-provisioner=true in profile "addons-897172"
	I1018 12:01:55.179980 2077724 addons.go:238] Setting addon storage-provisioner=true in "addons-897172"
	I1018 12:01:55.180027 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.180501 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.199055 2077724 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-897172"
	I1018 12:01:55.199136 2077724 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-897172"
	I1018 12:01:55.199169 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.199675 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.199884 2077724 out.go:179] * Verifying Kubernetes components...
	I1018 12:01:55.199057 2077724 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-897172"
	I1018 12:01:55.214980 2077724 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-897172"
	I1018 12:01:55.215338 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.199068 2077724 addons.go:69] Setting volcano=true in profile "addons-897172"
	I1018 12:01:55.232165 2077724 addons.go:238] Setting addon volcano=true in "addons-897172"
	I1018 12:01:55.232208 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.232926 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.274771 2077724 addons.go:69] Setting default-storageclass=true in profile "addons-897172"
	I1018 12:01:55.274810 2077724 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-897172"
	I1018 12:01:55.275232 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.297279 2077724 addons.go:69] Setting gcp-auth=true in profile "addons-897172"
	I1018 12:01:55.297364 2077724 mustload.go:65] Loading cluster: addons-897172
	I1018 12:01:55.297624 2077724 config.go:182] Loaded profile config "addons-897172": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:01:55.300970 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.327641 2077724 addons.go:69] Setting ingress=true in profile "addons-897172"
	I1018 12:01:55.327678 2077724 addons.go:238] Setting addon ingress=true in "addons-897172"
	I1018 12:01:55.327737 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.328548 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.353983 2077724 addons.go:69] Setting ingress-dns=true in profile "addons-897172"
	I1018 12:01:55.354027 2077724 addons.go:238] Setting addon ingress-dns=true in "addons-897172"
	I1018 12:01:55.354134 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.354781 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.365990 2077724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:01:55.199076 2077724 addons.go:69] Setting volumesnapshots=true in profile "addons-897172"
	I1018 12:01:55.390640 2077724 addons.go:238] Setting addon volumesnapshots=true in "addons-897172"
	I1018 12:01:55.390713 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.391452 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.433171 2077724 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 12:01:55.433346 2077724 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 12:01:55.433519 2077724 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 12:01:55.441196 2077724 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 12:01:55.441225 2077724 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 12:01:55.441290 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.460612 2077724 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 12:01:55.439982 2077724 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:01:55.460925 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 12:01:55.460999 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.440213 2077724 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-897172"
	I1018 12:01:55.483741 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.484206 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.486037 2077724 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 12:01:55.490506 2077724 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 12:01:55.493535 2077724 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 12:01:55.493596 2077724 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 12:01:55.493703 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.506237 2077724 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:01:55.506259 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 12:01:55.506331 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.511628 2077724 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 12:01:55.512478 2077724 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 12:01:55.512510 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 12:01:55.512601 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.525223 2077724 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1018 12:01:55.528058 2077724 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1018 12:01:55.530970 2077724 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1018 12:01:55.537409 2077724 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1018 12:01:55.537437 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1018 12:01:55.537506 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.544753 2077724 addons.go:238] Setting addon default-storageclass=true in "addons-897172"
	I1018 12:01:55.544840 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.545329 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.569150 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.569409 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 12:01:55.571780 2077724 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 12:01:55.574279 2077724 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 12:01:55.574378 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.581673 2077724 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 12:01:55.582002 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 12:01:55.601574 2077724 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
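
The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a log directive ahead of the errors plugin and, ahead of the forward plugin, a hosts block that resolves host.minikube.internal to the Docker network gateway. Reconstructed from the two sed expressions, the injected block reads:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
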
	I1018 12:01:55.617469 2077724 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 12:01:55.644075 2077724 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 12:01:55.644120 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 12:01:55.644203 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.617684 2077724 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:01:55.672933 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 12:01:55.673045 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.674079 2077724 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 12:01:55.687206 2077724 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 12:01:55.672749 2077724 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:01:55.672869 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 12:01:55.689267 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.690050 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.693283 2077724 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:01:55.693428 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:01:55.693491 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.701252 2077724 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:01:55.701277 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 12:01:55.704890 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.693292 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 12:01:55.693297 2077724 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:01:55.720303 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.721359 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.722009 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.722634 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 12:01:55.724068 2077724 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 12:01:55.724091 2077724 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 12:01:55.724149 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.728030 2077724 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:01:55.732920 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 12:01:55.734019 2077724 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:01:55.734062 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 12:01:55.734157 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.744423 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 12:01:55.752520 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 12:01:55.755494 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 12:01:55.762370 2077724 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 12:01:55.762405 2077724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 12:01:55.762472 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.776480 2077724 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 12:01:55.780157 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.789164 2077724 out.go:179]   - Using image docker.io/busybox:stable
	I1018 12:01:55.792965 2077724 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:01:55.792988 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 12:01:55.793056 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.802265 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.807122 2077724 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:01:55.807142 2077724 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:01:55.807198 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.883711 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.884028 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.903640 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.912332 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.918683 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.924310 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	W1018 12:01:55.932197 2077724 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 12:01:55.932242 2077724 retry.go:31] will retry after 279.479591ms: ssh: handshake failed: EOF
	W1018 12:01:55.932427 2077724 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 12:01:55.932444 2077724 retry.go:31] will retry after 258.849701ms: ssh: handshake failed: EOF
	I1018 12:01:55.933803 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.952059 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.953960 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	W1018 12:01:55.955074 2077724 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 12:01:55.955097 2077724 retry.go:31] will retry after 338.346835ms: ssh: handshake failed: EOF
	I1018 12:01:55.956960 2077724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1018 12:01:56.194244 2077724 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 12:01:56.194272 2077724 retry.go:31] will retry after 492.979292ms: ssh: handshake failed: EOF
	I1018 12:01:56.599571 2077724 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 12:01:56.599597 2077724 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 12:01:56.636926 2077724 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 12:01:56.636951 2077724 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 12:01:56.690746 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:01:56.691536 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:01:56.693476 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:01:56.694965 2077724 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 12:01:56.694984 2077724 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 12:01:56.704701 2077724 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 12:01:56.704724 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 12:01:56.720278 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:01:56.785529 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 12:01:56.807308 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1018 12:01:56.811611 2077724 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 12:01:56.811680 2077724 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 12:01:56.831119 2077724 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 12:01:56.831194 2077724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 12:01:56.833344 2077724 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.23168759s)
	I1018 12:01:56.833537 2077724 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1018 12:01:56.834246 2077724 node_ready.go:35] waiting up to 6m0s for node "addons-897172" to be "Ready" ...
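
node_ready.go polls the node's Ready condition; the "will retry" warnings that follow are that loop observing Ready=False while the CNI comes up. The equivalent one-shot check:

    kubectl get node addons-897172 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
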
	I1018 12:01:56.845232 2077724 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:56.845301 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 12:01:56.866974 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:01:56.923573 2077724 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 12:01:56.923668 2077724 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 12:01:56.930292 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:01:56.992626 2077724 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 12:01:56.992706 2077724 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 12:01:56.997386 2077724 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 12:01:56.997463 2077724 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 12:01:57.042848 2077724 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:01:57.042926 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 12:01:57.082695 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:57.099172 2077724 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 12:01:57.099247 2077724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 12:01:57.153023 2077724 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:01:57.153043 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 12:01:57.190917 2077724 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 12:01:57.190995 2077724 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 12:01:57.200014 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:01:57.228361 2077724 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:01:57.228435 2077724 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 12:01:57.283526 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:01:57.312033 2077724 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 12:01:57.312109 2077724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 12:01:57.340058 2077724 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-897172" context rescaled to 1 replicas
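
kubeadm ships coredns with two replicas; on a single-node cluster minikube rescales the Deployment to one, as logged above. By hand that would be:

    kubectl -n kube-system scale deployment coredns --replicas=1
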
	I1018 12:01:57.357577 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:01:57.376381 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:01:57.379146 2077724 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 12:01:57.379234 2077724 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 12:01:57.502013 2077724 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 12:01:57.502104 2077724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 12:01:57.505617 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:01:57.544304 2077724 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:01:57.544380 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 12:01:57.714094 2077724 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 12:01:57.714170 2077724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 12:01:57.791967 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:01:57.991772 2077724 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 12:01:57.991799 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 12:01:58.130707 2077724 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 12:01:58.130731 2077724 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 12:01:58.349625 2077724 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 12:01:58.349657 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 12:01:58.526352 2077724 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 12:01:58.526376 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 12:01:58.785750 2077724 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 12:01:58.785771 2077724 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	W1018 12:01:58.843652 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:01:59.047504 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 12:01:59.856212 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.165382232s)
	I1018 12:01:59.856472 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.1649146s)
	I1018 12:02:00.040994 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.34747908s)
	I1018 12:02:00.041163 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.320842941s)
	I1018 12:02:00.041247 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.255640201s)
	W1018 12:02:00.850342 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	W1018 12:02:02.905260 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:03.185091 2077724 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 12:02:03.185178 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:02:03.214205 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:02:03.352867 2077724 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 12:02:03.368311 2077724 addons.go:238] Setting addon gcp-auth=true in "addons-897172"
	I1018 12:02:03.368367 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:02:03.368819 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:02:03.398587 2077724 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 12:02:03.398651 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:02:03.427802 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:02:03.984133 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (7.176742656s)
	I1018 12:02:03.984189 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.117145885s)
	I1018 12:02:03.984368 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.054004146s)
	I1018 12:02:03.984449 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.901691865s)
	W1018 12:02:03.984470 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:03.984490 2077724 retry.go:31] will retry after 367.487904ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1 (stdout/stderr identical to the warning above)
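
The root cause of this apply failure is visible earlier in the log: at 12:01:55.493596, ig-crd.yaml was copied over at only 14 bytes, far too small to be a CRD manifest, so kubectl rejects it with "apiVersion not set, kind not set" while everything from ig-deployment.yaml applies cleanly. A client-side validation pass would catch this class of problem without touching the cluster:

    kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml
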
	I1018 12:02:03.984579 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.78449376s)
	I1018 12:02:03.984593 2077724 addons.go:479] Verifying addon ingress=true in "addons-897172"
	I1018 12:02:03.984786 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.70118689s)
	I1018 12:02:03.984879 2077724 addons.go:479] Verifying addon registry=true in "addons-897172"
	I1018 12:02:03.985050 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.627392855s)
	I1018 12:02:03.985313 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.608855749s)
	I1018 12:02:03.985863 2077724 addons.go:479] Verifying addon metrics-server=true in "addons-897172"
	I1018 12:02:03.985372 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.479682153s)
	I1018 12:02:03.985483 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.193434489s)
	W1018 12:02:03.985916 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 12:02:03.985933 2077724 retry.go:31] will retry after 125.590522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1 (stdout/stderr identical to the warning above)
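
Unlike the ig-crd failure, this one is an ordering race rather than a bad manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation as the CRD that defines its kind, before the API server has established the CRD. The forced re-apply at 12:02:04.112594 succeeds once the CRDs are in place; a sequencing that avoids the race entirely looks like:

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml
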
	I1018 12:02:03.985650 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.938067626s)
	I1018 12:02:03.985964 2077724 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-897172"
	I1018 12:02:03.989146 2077724 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-897172 service yakd-dashboard -n yakd-dashboard
	
	I1018 12:02:03.989181 2077724 out.go:179] * Verifying registry addon...
	I1018 12:02:03.989194 2077724 out.go:179] * Verifying ingress addon...
	I1018 12:02:03.993133 2077724 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:02:03.993241 2077724 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 12:02:03.995937 2077724 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 12:02:03.996736 2077724 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 12:02:03.998618 2077724 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 12:02:04.000919 2077724 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 12:02:04.004004 2077724 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 12:02:04.004047 2077724 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 12:02:04.038876 2077724 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 12:02:04.038899 2077724 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 12:02:04.074012 2077724 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:02:04.074034 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 12:02:04.099669 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:02:04.112594 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:02:04.130119 2077724 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 12:02:04.130440 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:04.130321 2077724 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 12:02:04.130512 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:04.130418 2077724 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 12:02:04.130569 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:04.353037 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:04.520764 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:04.521047 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:04.521091 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:05.005904 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:05.006357 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:05.009772 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:05.211469 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.111762228s)
	I1018 12:02:05.217148 2077724 addons.go:479] Verifying addon gcp-auth=true in "addons-897172"
	I1018 12:02:05.220256 2077724 out.go:179] * Verifying gcp-auth addon...
	I1018 12:02:05.224050 2077724 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 12:02:05.227512 2077724 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 12:02:05.227587 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:05.338461 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:05.500929 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:05.501767 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:05.502241 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:05.533106 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.420472367s)
	I1018 12:02:05.727930 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:05.747389 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.39426382s)
	W1018 12:02:05.747424 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:05.747442 2077724 retry.go:31] will retry after 231.418033ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
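
Every one of these apply failures has the same root cause, shown in the stderr above: a document in ig-crd.yaml is missing its apiVersion and kind header, so kubectl's client-side validation rejects the file before it ever reaches the API server. A minimal sketch of that check, assuming sigs.k8s.io/yaml as the decoder (illustrative, not kubectl's actual validator):

package main

import (
	"fmt"
	"os"
	"strings"

	"sigs.k8s.io/yaml" // assumed dependency; any YAML decoder works here
)

// typeMeta mirrors the two fields every Kubernetes manifest document must set.
type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml") // path from the log
	if err != nil {
		panic(err)
	}
	// A manifest file may hold several documents separated by "---".
	for i, doc := range strings.Split(string(data), "\n---") {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		var tm typeMeta
		if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
			fmt.Printf("document %d: unparsable: %v\n", i, err)
			continue
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			// This is the condition behind "apiVersion not set, kind not set".
			fmt.Printf("document %d: apiVersion/kind missing\n", i)
		}
	}
}
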
	I1018 12:02:05.979592 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:06.002289 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:06.003430 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:06.005110 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:06.227201 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:06.500574 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:06.500786 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:06.502926 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:06.727247 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:06.798356 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:06.798386 2077724 retry.go:31] will retry after 495.929746ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 12:02:07.000180 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:07.000333 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:07.003712 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:07.227952 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:07.295034 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:07.499317 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:07.500616 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:07.501555 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:07.728023 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:07.838785 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
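
The node_ready.go retries above boil down to reading the node's NodeReady condition and trying again while it reports False. A rough client-go equivalent (illustrative only):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-897172", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Prints Ready=False until the CNI and kubelet settle, as in the log.
			fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
		}
	}
}
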
	I1018 12:02:08.002192 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:08.005381 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:08.005908 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 12:02:08.131275 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:08.131305 2077724 retry.go:31] will retry after 603.765616ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 12:02:08.227574 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:08.499166 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:08.500546 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:08.501041 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:08.726750 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:08.736066 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:09.002062 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:09.002269 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:09.004228 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:09.227657 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:09.511224 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:09.511396 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:09.512265 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 12:02:09.616046 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:09.616077 2077724 retry.go:31] will retry after 664.404477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 12:02:09.726845 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:10.003446 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:10.003803 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:10.004781 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:10.227604 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:10.280977 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 12:02:10.338623 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:10.503834 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:10.504381 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:10.504705 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:10.727145 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:11.003525 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:11.003808 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:11.003891 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 12:02:11.101255 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:11.101293 2077724 retry.go:31] will retry after 2.822526788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
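
Note how the retry intervals grow: 231ms, 495ms, 603ms, 664ms, now 2.8s, and larger still below. That pattern is consistent with exponential backoff plus jitter; a minimal sketch of such a policy (the doubling factor and jitter range are assumptions, not minikube's exact retry.go parameters):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs op until it succeeds or attempts are exhausted,
// sleeping a growing, jittered delay between failures.
func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Jitter keeps concurrent retriers from synchronizing.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2 // exponential growth, as the log's intervals suggest
	}
	return err
}

func main() {
	n := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		n++
		if n < 4 {
			return fmt.Errorf("apply failed (attempt %d)", n)
		}
		return nil
	})
	fmt.Println("result:", err)
}
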
	I1018 12:02:11.227105 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:11.499476 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:11.500898 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:11.502072 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:11.726863 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:11.999988 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:12.000680 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:12.003876 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:12.227623 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:12.499576 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:12.499781 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:12.501735 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:12.727601 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:12.837581 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:13.000332 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:13.000556 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:13.006040 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:13.226868 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:13.501079 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:13.501488 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:13.503157 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:13.727059 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:13.924147 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:14.001326 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:14.001826 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:14.006486 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:14.227218 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:14.501005 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:14.502446 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:14.504112 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:14.727285 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:14.728882 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:14.728907 2077724 retry.go:31] will retry after 3.431148696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	W1018 12:02:14.837889 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:14.999945 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:15.005830 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:15.006327 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:15.227792 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:15.498905 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:15.501272 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:15.501379 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:15.727358 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:15.998899 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:16.005611 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:16.005741 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:16.227666 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:16.500668 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:16.501270 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:16.501930 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:16.727019 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:17.002094 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:17.002516 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:17.003330 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:17.227682 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:17.337600 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:17.500550 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:17.501192 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:17.503022 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:17.727350 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:18.000412 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:18.005904 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:18.006485 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:18.160859 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:18.227263 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:18.502249 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:18.504090 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:18.505005 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:18.727275 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:18.960260 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:18.960291 2077724 retry.go:31] will retry after 3.045277304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 12:02:19.003924 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:19.004085 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:19.004527 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:19.227560 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:19.499436 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:19.500523 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:19.501276 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:19.727100 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:19.838063 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:19.999564 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:20.001160 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:20.002168 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:20.226923 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:20.501229 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:20.501584 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:20.502374 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:20.727380 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:21.007955 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:21.008079 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:21.008198 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:21.227036 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:21.499921 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:21.500094 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:21.502663 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:21.727527 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:21.999118 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:22.000578 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:22.005242 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:22.005869 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:22.227940 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:22.338340 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:22.501129 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:22.503052 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:22.503363 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:22.727150 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:22.824406 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:22.824436 2077724 retry.go:31] will retry after 3.710811743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 12:02:23.000163 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:23.000492 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:23.005359 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:23.227538 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:23.500060 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:23.500106 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:23.502446 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:23.727534 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:24.005172 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:24.005329 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:24.005694 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:24.249868 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:24.341216 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:24.500141 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:24.500390 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:24.501980 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:24.727036 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:25.003058 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:25.003561 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:25.008457 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:25.227789 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:25.499349 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:25.500869 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:25.501179 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:25.727814 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:26.001443 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:26.002202 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:26.011972 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:26.227084 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:26.500968 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:26.501534 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:26.501595 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:26.535855 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:26.726940 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:26.838457 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:27.004524 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:27.004702 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:27.005075 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:27.227979 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:27.324866 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:27.324896 2077724 retry.go:31] will retry after 10.387791324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1018 12:02:27.500850 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:27.501348 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:27.502267 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:27.727322 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:27.999390 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:28.000154 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:28.004067 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:28.227319 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:28.500346 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:28.500663 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:28.502257 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:28.727894 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:29.000636 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:29.001703 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:29.002757 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:29.227655 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:29.337946 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:29.499324 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:29.500414 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:29.501045 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:29.726780 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:29.999907 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:30.000082 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:30.003970 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:30.226949 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:30.501384 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:30.501454 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:30.502068 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:30.726781 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:30.999308 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:30.999884 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:31.001949 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:31.226732 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:31.500550 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:31.501185 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:31.501705 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:31.727517 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:31.837183 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:31.999247 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:32.000494 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:32.003303 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:32.227085 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:32.499106 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:32.501742 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:32.502124 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:32.726912 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:32.999758 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:32.999921 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:33.004449 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:33.227553 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:33.500346 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:33.500481 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:33.501416 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:33.727469 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:33.837227 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:33.999264 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:33.999481 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:34.002718 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:34.227625 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:34.498893 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:34.500940 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:34.501095 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:34.726955 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:35.000693 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:35.000865 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:35.003750 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:35.227522 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:35.500290 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:35.500377 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:35.502337 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:35.727223 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:35.837891 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:35.998714 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:36.007597 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:36.007727 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:36.227773 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:36.500166 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:36.501691 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:36.502703 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:36.742420 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:36.840417 2077724 node_ready.go:49] node "addons-897172" is "Ready"
	I1018 12:02:36.840443 2077724 node_ready.go:38] duration metric: took 40.006143267s for node "addons-897172" to be "Ready" ...
	I1018 12:02:36.840458 2077724 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:02:36.840524 2077724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:02:36.878264 2077724 api_server.go:72] duration metric: took 41.736800584s to wait for apiserver process to appear ...
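
The apiserver process check is literally the pgrep invocation logged above, executed inside the node over SSH. Run locally, the same probe is a short sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -x exact match, -n newest process, -f match the full command line,
	// exactly as in the ssh_runner line above.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}
	fmt.Printf("apiserver pid: %s", out)
}
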
	I1018 12:02:36.878291 2077724 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:02:36.878314 2077724 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:02:36.894278 2077724 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 12:02:36.896195 2077724 api_server.go:141] control plane version: v1.34.1
	I1018 12:02:36.896224 2077724 api_server.go:131] duration metric: took 17.924905ms to wait for apiserver health ...
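
The healthz probe is a plain HTTPS GET that expects the literal body "ok", as logged above. A sketch of that request (minikube authenticates against the cluster's CA; InsecureSkipVerify here is purely to keep the example short and is an assumption, not minikube's behavior):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
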
	I1018 12:02:36.896234 2077724 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:02:36.906671 2077724 system_pods.go:59] 19 kube-system pods found
	I1018 12:02:36.906711 2077724 system_pods.go:61] "coredns-66bc5c9577-72vfc" [27c050a1-1b42-4f69-a37c-3864d231020f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:02:36.906718 2077724 system_pods.go:61] "csi-hostpath-attacher-0" [a34aa7bb-8dee-4ec2-943a-41a6009b98f3] Pending
	I1018 12:02:36.906725 2077724 system_pods.go:61] "csi-hostpath-resizer-0" [4182e5e9-dc78-47da-981e-07c5080e1bf3] Pending
	I1018 12:02:36.906731 2077724 system_pods.go:61] "csi-hostpathplugin-kkctl" [b9d6daf8-3cf1-4c40-81a0-90f3c347f16b] Pending
	I1018 12:02:36.906735 2077724 system_pods.go:61] "etcd-addons-897172" [1e9ee371-bcf9-477e-b0d7-647b692c0bdf] Running
	I1018 12:02:36.906740 2077724 system_pods.go:61] "kindnet-zx4jd" [f6966169-6361-46fa-b60d-7cb67c785953] Running
	I1018 12:02:36.906745 2077724 system_pods.go:61] "kube-apiserver-addons-897172" [573e8cff-85cc-4011-9bae-ffb4383801a9] Running
	I1018 12:02:36.906756 2077724 system_pods.go:61] "kube-controller-manager-addons-897172" [f5e44727-b557-4306-823a-90703f85917a] Running
	I1018 12:02:36.906761 2077724 system_pods.go:61] "kube-ingress-dns-minikube" [ea2ea820-0b46-46a5-8a36-d2e04ef74663] Pending
	I1018 12:02:36.906768 2077724 system_pods.go:61] "kube-proxy-5wvw6" [9337e807-7e99-4b42-b629-1c3b5cb70d8a] Running
	I1018 12:02:36.906772 2077724 system_pods.go:61] "kube-scheduler-addons-897172" [53d504e2-c117-4c17-834b-e396bec7496c] Running
	I1018 12:02:36.906781 2077724 system_pods.go:61] "metrics-server-85b7d694d7-l9lnl" [6f318350-01b1-4662-8b11-9623be573bf2] Pending
	I1018 12:02:36.906785 2077724 system_pods.go:61] "nvidia-device-plugin-daemonset-9gjsj" [1564a3ac-4f7b-4ea4-be9f-210c9a9f2c5f] Pending
	I1018 12:02:36.906798 2077724 system_pods.go:61] "registry-6b586f9694-m2bbx" [1ff58f99-278b-411c-a5e0-83d367ff2f01] Pending
	I1018 12:02:36.906805 2077724 system_pods.go:61] "registry-creds-764b6fb674-b6zx6" [c32793b2-06b3-4b42-9b78-938c01bcfd38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:02:36.906811 2077724 system_pods.go:61] "registry-proxy-mhx2x" [5d015e78-b666-4fb1-b7f4-92ef0c57241e] Pending
	I1018 12:02:36.906823 2077724 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2np2z" [f53631ad-f24e-4ae7-84a9-2894cd32d094] Pending
	I1018 12:02:36.906830 2077724 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kx2sh" [83ac2397-32ad-4982-bec3-81ea5d47c49a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:36.906836 2077724 system_pods.go:61] "storage-provisioner" [93c0a2f7-1aab-485f-9f75-20eb6c39f998] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:02:36.906846 2077724 system_pods.go:74] duration metric: took 10.605992ms to wait for pod list to return data ...
	I1018 12:02:36.906858 2077724 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:02:36.945271 2077724 default_sa.go:45] found service account: "default"
	I1018 12:02:36.945299 2077724 default_sa.go:55] duration metric: took 38.433528ms for default service account to be created ...
	I1018 12:02:36.945309 2077724 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:02:37.008400 2077724 system_pods.go:86] 19 kube-system pods found
	I1018 12:02:37.008441 2077724 system_pods.go:89] "coredns-66bc5c9577-72vfc" [27c050a1-1b42-4f69-a37c-3864d231020f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:02:37.008448 2077724 system_pods.go:89] "csi-hostpath-attacher-0" [a34aa7bb-8dee-4ec2-943a-41a6009b98f3] Pending
	I1018 12:02:37.008455 2077724 system_pods.go:89] "csi-hostpath-resizer-0" [4182e5e9-dc78-47da-981e-07c5080e1bf3] Pending
	I1018 12:02:37.008460 2077724 system_pods.go:89] "csi-hostpathplugin-kkctl" [b9d6daf8-3cf1-4c40-81a0-90f3c347f16b] Pending
	I1018 12:02:37.008464 2077724 system_pods.go:89] "etcd-addons-897172" [1e9ee371-bcf9-477e-b0d7-647b692c0bdf] Running
	I1018 12:02:37.008470 2077724 system_pods.go:89] "kindnet-zx4jd" [f6966169-6361-46fa-b60d-7cb67c785953] Running
	I1018 12:02:37.008475 2077724 system_pods.go:89] "kube-apiserver-addons-897172" [573e8cff-85cc-4011-9bae-ffb4383801a9] Running
	I1018 12:02:37.008484 2077724 system_pods.go:89] "kube-controller-manager-addons-897172" [f5e44727-b557-4306-823a-90703f85917a] Running
	I1018 12:02:37.008491 2077724 system_pods.go:89] "kube-ingress-dns-minikube" [ea2ea820-0b46-46a5-8a36-d2e04ef74663] Pending
	I1018 12:02:37.008494 2077724 system_pods.go:89] "kube-proxy-5wvw6" [9337e807-7e99-4b42-b629-1c3b5cb70d8a] Running
	I1018 12:02:37.008501 2077724 system_pods.go:89] "kube-scheduler-addons-897172" [53d504e2-c117-4c17-834b-e396bec7496c] Running
	I1018 12:02:37.008509 2077724 system_pods.go:89] "metrics-server-85b7d694d7-l9lnl" [6f318350-01b1-4662-8b11-9623be573bf2] Pending
	I1018 12:02:37.008514 2077724 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjsj" [1564a3ac-4f7b-4ea4-be9f-210c9a9f2c5f] Pending
	I1018 12:02:37.008518 2077724 system_pods.go:89] "registry-6b586f9694-m2bbx" [1ff58f99-278b-411c-a5e0-83d367ff2f01] Pending
	I1018 12:02:37.008537 2077724 system_pods.go:89] "registry-creds-764b6fb674-b6zx6" [c32793b2-06b3-4b42-9b78-938c01bcfd38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:02:37.008542 2077724 system_pods.go:89] "registry-proxy-mhx2x" [5d015e78-b666-4fb1-b7f4-92ef0c57241e] Pending
	I1018 12:02:37.008550 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2np2z" [f53631ad-f24e-4ae7-84a9-2894cd32d094] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:37.008565 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kx2sh" [83ac2397-32ad-4982-bec3-81ea5d47c49a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:37.008571 2077724 system_pods.go:89] "storage-provisioner" [93c0a2f7-1aab-485f-9f75-20eb6c39f998] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:02:37.008589 2077724 retry.go:31] will retry after 303.021535ms: missing components: kube-dns
	I1018 12:02:37.009096 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:37.009194 2077724 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 12:02:37.009208 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:37.010349 2077724 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 12:02:37.010374 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:37.229536 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
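
Each kapi.go:96 line above is one iteration of a poll that lists pods by label selector and reports their phase until they leave Pending. A sketch of that loop with client-go, assuming a reachable kubeconfig at the default path and the current context pointing at this cluster; the selector and namespace are taken from the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config; assumes the minikube context is current.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		selector := "kubernetes.io/minikube-addons=registry" // selector from the log
		for {
			pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				panic(err)
			}
			pending := 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					pending++
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if len(pods.Items) > 0 && pending == 0 {
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
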
	I1018 12:02:37.319989 2077724 system_pods.go:86] 19 kube-system pods found
	I1018 12:02:37.320029 2077724 system_pods.go:89] "coredns-66bc5c9577-72vfc" [27c050a1-1b42-4f69-a37c-3864d231020f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:02:37.320036 2077724 system_pods.go:89] "csi-hostpath-attacher-0" [a34aa7bb-8dee-4ec2-943a-41a6009b98f3] Pending
	I1018 12:02:37.320043 2077724 system_pods.go:89] "csi-hostpath-resizer-0" [4182e5e9-dc78-47da-981e-07c5080e1bf3] Pending
	I1018 12:02:37.320047 2077724 system_pods.go:89] "csi-hostpathplugin-kkctl" [b9d6daf8-3cf1-4c40-81a0-90f3c347f16b] Pending
	I1018 12:02:37.320051 2077724 system_pods.go:89] "etcd-addons-897172" [1e9ee371-bcf9-477e-b0d7-647b692c0bdf] Running
	I1018 12:02:37.320056 2077724 system_pods.go:89] "kindnet-zx4jd" [f6966169-6361-46fa-b60d-7cb67c785953] Running
	I1018 12:02:37.320066 2077724 system_pods.go:89] "kube-apiserver-addons-897172" [573e8cff-85cc-4011-9bae-ffb4383801a9] Running
	I1018 12:02:37.320071 2077724 system_pods.go:89] "kube-controller-manager-addons-897172" [f5e44727-b557-4306-823a-90703f85917a] Running
	I1018 12:02:37.320075 2077724 system_pods.go:89] "kube-ingress-dns-minikube" [ea2ea820-0b46-46a5-8a36-d2e04ef74663] Pending
	I1018 12:02:37.320084 2077724 system_pods.go:89] "kube-proxy-5wvw6" [9337e807-7e99-4b42-b629-1c3b5cb70d8a] Running
	I1018 12:02:37.320089 2077724 system_pods.go:89] "kube-scheduler-addons-897172" [53d504e2-c117-4c17-834b-e396bec7496c] Running
	I1018 12:02:37.320099 2077724 system_pods.go:89] "metrics-server-85b7d694d7-l9lnl" [6f318350-01b1-4662-8b11-9623be573bf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:02:37.320104 2077724 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjsj" [1564a3ac-4f7b-4ea4-be9f-210c9a9f2c5f] Pending
	I1018 12:02:37.320118 2077724 system_pods.go:89] "registry-6b586f9694-m2bbx" [1ff58f99-278b-411c-a5e0-83d367ff2f01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:02:37.320124 2077724 system_pods.go:89] "registry-creds-764b6fb674-b6zx6" [c32793b2-06b3-4b42-9b78-938c01bcfd38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:02:37.320137 2077724 system_pods.go:89] "registry-proxy-mhx2x" [5d015e78-b666-4fb1-b7f4-92ef0c57241e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:02:37.320144 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2np2z" [f53631ad-f24e-4ae7-84a9-2894cd32d094] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:37.320151 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kx2sh" [83ac2397-32ad-4982-bec3-81ea5d47c49a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:37.320162 2077724 system_pods.go:89] "storage-provisioner" [93c0a2f7-1aab-485f-9f75-20eb6c39f998] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:02:37.320178 2077724 retry.go:31] will retry after 288.724433ms: missing components: kube-dns
	I1018 12:02:37.501350 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:37.501704 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:37.502108 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:37.631984 2077724 system_pods.go:86] 19 kube-system pods found
	I1018 12:02:37.632024 2077724 system_pods.go:89] "coredns-66bc5c9577-72vfc" [27c050a1-1b42-4f69-a37c-3864d231020f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:02:37.632034 2077724 system_pods.go:89] "csi-hostpath-attacher-0" [a34aa7bb-8dee-4ec2-943a-41a6009b98f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:02:37.632043 2077724 system_pods.go:89] "csi-hostpath-resizer-0" [4182e5e9-dc78-47da-981e-07c5080e1bf3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:02:37.632050 2077724 system_pods.go:89] "csi-hostpathplugin-kkctl" [b9d6daf8-3cf1-4c40-81a0-90f3c347f16b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:02:37.632055 2077724 system_pods.go:89] "etcd-addons-897172" [1e9ee371-bcf9-477e-b0d7-647b692c0bdf] Running
	I1018 12:02:37.632060 2077724 system_pods.go:89] "kindnet-zx4jd" [f6966169-6361-46fa-b60d-7cb67c785953] Running
	I1018 12:02:37.632069 2077724 system_pods.go:89] "kube-apiserver-addons-897172" [573e8cff-85cc-4011-9bae-ffb4383801a9] Running
	I1018 12:02:37.632074 2077724 system_pods.go:89] "kube-controller-manager-addons-897172" [f5e44727-b557-4306-823a-90703f85917a] Running
	I1018 12:02:37.632086 2077724 system_pods.go:89] "kube-ingress-dns-minikube" [ea2ea820-0b46-46a5-8a36-d2e04ef74663] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:02:37.632091 2077724 system_pods.go:89] "kube-proxy-5wvw6" [9337e807-7e99-4b42-b629-1c3b5cb70d8a] Running
	I1018 12:02:37.632096 2077724 system_pods.go:89] "kube-scheduler-addons-897172" [53d504e2-c117-4c17-834b-e396bec7496c] Running
	I1018 12:02:37.632109 2077724 system_pods.go:89] "metrics-server-85b7d694d7-l9lnl" [6f318350-01b1-4662-8b11-9623be573bf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:02:37.632116 2077724 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjsj" [1564a3ac-4f7b-4ea4-be9f-210c9a9f2c5f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:02:37.632127 2077724 system_pods.go:89] "registry-6b586f9694-m2bbx" [1ff58f99-278b-411c-a5e0-83d367ff2f01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:02:37.632133 2077724 system_pods.go:89] "registry-creds-764b6fb674-b6zx6" [c32793b2-06b3-4b42-9b78-938c01bcfd38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:02:37.632140 2077724 system_pods.go:89] "registry-proxy-mhx2x" [5d015e78-b666-4fb1-b7f4-92ef0c57241e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:02:37.632146 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2np2z" [f53631ad-f24e-4ae7-84a9-2894cd32d094] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:37.632155 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kx2sh" [83ac2397-32ad-4982-bec3-81ea5d47c49a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:37.632165 2077724 system_pods.go:89] "storage-provisioner" [93c0a2f7-1aab-485f-9f75-20eb6c39f998] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:02:37.632183 2077724 retry.go:31] will retry after 378.474191ms: missing components: kube-dns
	I1018 12:02:37.713450 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:37.727658 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:38.003258 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:38.003440 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:38.003601 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:38.015012 2077724 system_pods.go:86] 19 kube-system pods found
	I1018 12:02:38.015053 2077724 system_pods.go:89] "coredns-66bc5c9577-72vfc" [27c050a1-1b42-4f69-a37c-3864d231020f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:02:38.015063 2077724 system_pods.go:89] "csi-hostpath-attacher-0" [a34aa7bb-8dee-4ec2-943a-41a6009b98f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:02:38.015072 2077724 system_pods.go:89] "csi-hostpath-resizer-0" [4182e5e9-dc78-47da-981e-07c5080e1bf3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:02:38.015082 2077724 system_pods.go:89] "csi-hostpathplugin-kkctl" [b9d6daf8-3cf1-4c40-81a0-90f3c347f16b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:02:38.015090 2077724 system_pods.go:89] "etcd-addons-897172" [1e9ee371-bcf9-477e-b0d7-647b692c0bdf] Running
	I1018 12:02:38.015096 2077724 system_pods.go:89] "kindnet-zx4jd" [f6966169-6361-46fa-b60d-7cb67c785953] Running
	I1018 12:02:38.015112 2077724 system_pods.go:89] "kube-apiserver-addons-897172" [573e8cff-85cc-4011-9bae-ffb4383801a9] Running
	I1018 12:02:38.015118 2077724 system_pods.go:89] "kube-controller-manager-addons-897172" [f5e44727-b557-4306-823a-90703f85917a] Running
	I1018 12:02:38.015126 2077724 system_pods.go:89] "kube-ingress-dns-minikube" [ea2ea820-0b46-46a5-8a36-d2e04ef74663] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:02:38.015135 2077724 system_pods.go:89] "kube-proxy-5wvw6" [9337e807-7e99-4b42-b629-1c3b5cb70d8a] Running
	I1018 12:02:38.015140 2077724 system_pods.go:89] "kube-scheduler-addons-897172" [53d504e2-c117-4c17-834b-e396bec7496c] Running
	I1018 12:02:38.015145 2077724 system_pods.go:89] "metrics-server-85b7d694d7-l9lnl" [6f318350-01b1-4662-8b11-9623be573bf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:02:38.015153 2077724 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjsj" [1564a3ac-4f7b-4ea4-be9f-210c9a9f2c5f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:02:38.015181 2077724 system_pods.go:89] "registry-6b586f9694-m2bbx" [1ff58f99-278b-411c-a5e0-83d367ff2f01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:02:38.015190 2077724 system_pods.go:89] "registry-creds-764b6fb674-b6zx6" [c32793b2-06b3-4b42-9b78-938c01bcfd38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:02:38.015201 2077724 system_pods.go:89] "registry-proxy-mhx2x" [5d015e78-b666-4fb1-b7f4-92ef0c57241e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:02:38.015208 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2np2z" [f53631ad-f24e-4ae7-84a9-2894cd32d094] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:38.015219 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kx2sh" [83ac2397-32ad-4982-bec3-81ea5d47c49a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:38.015226 2077724 system_pods.go:89] "storage-provisioner" [93c0a2f7-1aab-485f-9f75-20eb6c39f998] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:02:38.015246 2077724 retry.go:31] will retry after 499.684215ms: missing components: kube-dns
	I1018 12:02:38.227339 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:38.501450 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:38.501592 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:38.502776 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:38.519711 2077724 system_pods.go:86] 19 kube-system pods found
	I1018 12:02:38.519752 2077724 system_pods.go:89] "coredns-66bc5c9577-72vfc" [27c050a1-1b42-4f69-a37c-3864d231020f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:02:38.519762 2077724 system_pods.go:89] "csi-hostpath-attacher-0" [a34aa7bb-8dee-4ec2-943a-41a6009b98f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:02:38.519770 2077724 system_pods.go:89] "csi-hostpath-resizer-0" [4182e5e9-dc78-47da-981e-07c5080e1bf3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:02:38.519777 2077724 system_pods.go:89] "csi-hostpathplugin-kkctl" [b9d6daf8-3cf1-4c40-81a0-90f3c347f16b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:02:38.519785 2077724 system_pods.go:89] "etcd-addons-897172" [1e9ee371-bcf9-477e-b0d7-647b692c0bdf] Running
	I1018 12:02:38.519796 2077724 system_pods.go:89] "kindnet-zx4jd" [f6966169-6361-46fa-b60d-7cb67c785953] Running
	I1018 12:02:38.519801 2077724 system_pods.go:89] "kube-apiserver-addons-897172" [573e8cff-85cc-4011-9bae-ffb4383801a9] Running
	I1018 12:02:38.519812 2077724 system_pods.go:89] "kube-controller-manager-addons-897172" [f5e44727-b557-4306-823a-90703f85917a] Running
	I1018 12:02:38.519818 2077724 system_pods.go:89] "kube-ingress-dns-minikube" [ea2ea820-0b46-46a5-8a36-d2e04ef74663] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:02:38.519830 2077724 system_pods.go:89] "kube-proxy-5wvw6" [9337e807-7e99-4b42-b629-1c3b5cb70d8a] Running
	I1018 12:02:38.519853 2077724 system_pods.go:89] "kube-scheduler-addons-897172" [53d504e2-c117-4c17-834b-e396bec7496c] Running
	I1018 12:02:38.519862 2077724 system_pods.go:89] "metrics-server-85b7d694d7-l9lnl" [6f318350-01b1-4662-8b11-9623be573bf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:02:38.519868 2077724 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjsj" [1564a3ac-4f7b-4ea4-be9f-210c9a9f2c5f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:02:38.519878 2077724 system_pods.go:89] "registry-6b586f9694-m2bbx" [1ff58f99-278b-411c-a5e0-83d367ff2f01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:02:38.519885 2077724 system_pods.go:89] "registry-creds-764b6fb674-b6zx6" [c32793b2-06b3-4b42-9b78-938c01bcfd38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:02:38.519895 2077724 system_pods.go:89] "registry-proxy-mhx2x" [5d015e78-b666-4fb1-b7f4-92ef0c57241e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:02:38.519903 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2np2z" [f53631ad-f24e-4ae7-84a9-2894cd32d094] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:38.519916 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kx2sh" [83ac2397-32ad-4982-bec3-81ea5d47c49a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:38.519920 2077724 system_pods.go:89] "storage-provisioner" [93c0a2f7-1aab-485f-9f75-20eb6c39f998] Running
	I1018 12:02:38.519935 2077724 retry.go:31] will retry after 619.284345ms: missing components: kube-dns
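
The retry.go:31 intervals above (303ms, 288ms, 378ms, 499ms, 619ms) grow roughly geometrically with random jitter. A self-contained sketch of that shape, not minikube's actual retry implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryAfter grows the wait by about a third each attempt plus jitter --
	// the shape of the "will retry after ..." lines, not minikube's retry.go.
	func retryAfter(check func() error, maxAttempts int) error {
		wait := 300 * time.Millisecond
		for i := 0; i < maxAttempts; i++ {
			err := check()
			if err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			wait += wait/3 + time.Duration(rand.Int63n(int64(wait)/2))
		}
		return errors.New("retries exhausted")
	}

	func main() {
		attempt := 0
		_ = retryAfter(func() error {
			attempt++
			if attempt < 4 {
				return errors.New("missing components: kube-dns") // condition from the log
			}
			return nil
		}, 10)
	}
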
	I1018 12:02:38.727083 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:39.003613 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:39.003781 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:39.004504 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:39.146577 2077724 system_pods.go:86] 19 kube-system pods found
	I1018 12:02:39.146613 2077724 system_pods.go:89] "coredns-66bc5c9577-72vfc" [27c050a1-1b42-4f69-a37c-3864d231020f] Running
	I1018 12:02:39.146635 2077724 system_pods.go:89] "csi-hostpath-attacher-0" [a34aa7bb-8dee-4ec2-943a-41a6009b98f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:02:39.146643 2077724 system_pods.go:89] "csi-hostpath-resizer-0" [4182e5e9-dc78-47da-981e-07c5080e1bf3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:02:39.146656 2077724 system_pods.go:89] "csi-hostpathplugin-kkctl" [b9d6daf8-3cf1-4c40-81a0-90f3c347f16b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:02:39.146671 2077724 system_pods.go:89] "etcd-addons-897172" [1e9ee371-bcf9-477e-b0d7-647b692c0bdf] Running
	I1018 12:02:39.146676 2077724 system_pods.go:89] "kindnet-zx4jd" [f6966169-6361-46fa-b60d-7cb67c785953] Running
	I1018 12:02:39.146681 2077724 system_pods.go:89] "kube-apiserver-addons-897172" [573e8cff-85cc-4011-9bae-ffb4383801a9] Running
	I1018 12:02:39.146690 2077724 system_pods.go:89] "kube-controller-manager-addons-897172" [f5e44727-b557-4306-823a-90703f85917a] Running
	I1018 12:02:39.146697 2077724 system_pods.go:89] "kube-ingress-dns-minikube" [ea2ea820-0b46-46a5-8a36-d2e04ef74663] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:02:39.146707 2077724 system_pods.go:89] "kube-proxy-5wvw6" [9337e807-7e99-4b42-b629-1c3b5cb70d8a] Running
	I1018 12:02:39.146712 2077724 system_pods.go:89] "kube-scheduler-addons-897172" [53d504e2-c117-4c17-834b-e396bec7496c] Running
	I1018 12:02:39.146717 2077724 system_pods.go:89] "metrics-server-85b7d694d7-l9lnl" [6f318350-01b1-4662-8b11-9623be573bf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:02:39.146734 2077724 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjsj" [1564a3ac-4f7b-4ea4-be9f-210c9a9f2c5f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:02:39.146740 2077724 system_pods.go:89] "registry-6b586f9694-m2bbx" [1ff58f99-278b-411c-a5e0-83d367ff2f01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:02:39.146748 2077724 system_pods.go:89] "registry-creds-764b6fb674-b6zx6" [c32793b2-06b3-4b42-9b78-938c01bcfd38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:02:39.146755 2077724 system_pods.go:89] "registry-proxy-mhx2x" [5d015e78-b666-4fb1-b7f4-92ef0c57241e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:02:39.146764 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2np2z" [f53631ad-f24e-4ae7-84a9-2894cd32d094] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:39.146786 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kx2sh" [83ac2397-32ad-4982-bec3-81ea5d47c49a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:39.146799 2077724 system_pods.go:89] "storage-provisioner" [93c0a2f7-1aab-485f-9f75-20eb6c39f998] Running
	I1018 12:02:39.146809 2077724 system_pods.go:126] duration metric: took 2.201493858s to wait for k8s-apps to be running ...
	I1018 12:02:39.146821 2077724 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:02:39.146879 2077724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:02:39.227308 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:39.504835 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:39.504929 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:39.506327 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:39.511518 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.798025909s)
	W1018 12:02:39.511558 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:39.511575 2077724 retry.go:31] will retry after 18.950534275s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
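
The apply is failing kubectl's client-side validation, which requires every document in a manifest to carry top-level apiVersion and kind fields; a CRD would normally begin with apiVersion: apiextensions.k8s.io/v1 and kind: CustomResourceDefinition, and the [apiVersion not set, kind not set] message suggests at least one document in ig-crd.yaml is missing both. A Go sketch that reproduces the per-document check, using the third-party gopkg.in/yaml.v3 package; the path is the one from the log and would need to be read inside the minikube node (or against a copy):

	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 0; ; i++ {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF once every document has been read
			}
			var missing []string
			if _, ok := doc["apiVersion"]; !ok {
				missing = append(missing, "apiVersion not set")
			}
			if _, ok := doc["kind"]; !ok {
				missing = append(missing, "kind not set")
			}
			if len(missing) > 0 {
				fmt.Printf("document %d: %v\n", i, missing)
			}
		}
	}
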
	I1018 12:02:39.511614 2077724 system_svc.go:56] duration metric: took 364.788564ms WaitForService to wait for kubelet
	I1018 12:02:39.511630 2077724 kubeadm.go:586] duration metric: took 44.370171471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:02:39.511647 2077724 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:02:39.514514 2077724 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:02:39.514548 2077724 node_conditions.go:123] node cpu capacity is 2
	I1018 12:02:39.514560 2077724 node_conditions.go:105] duration metric: took 2.902686ms to run NodePressure ...
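
node_conditions.go reads the capacity figures and pressure conditions straight off the Node object. A client-go sketch of the same read, under the same kubeconfig assumption as above:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
			for _, c := range n.Status.Conditions {
				// MemoryPressure/DiskPressure/PIDPressure should be False on a healthy node.
				if c.Type != corev1.NodeReady {
					fmt.Printf("  %s=%s\n", c.Type, c.Status)
				}
			}
		}
	}
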
	I1018 12:02:39.514571 2077724 start.go:241] waiting for startup goroutines ...
	I1018 12:02:39.727772 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:40.004549 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:40.004656 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:40.005575 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:40.229608 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:40.503647 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:40.504154 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:40.504462 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:40.728307 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:41.002301 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:41.002609 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:41.005098 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:41.227219 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:41.502619 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:41.502890 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:41.502993 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:41.727347 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:42.007960 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:42.008259 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:42.008390 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:42.229016 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:42.500691 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:42.502766 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:42.504148 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:42.727862 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:43.001582 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:43.002088 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:43.004575 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:43.227069 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:43.504190 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:43.504389 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:43.504520 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:43.727906 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:44.002425 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:44.002801 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:44.003692 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:44.227724 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:44.502469 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:44.502711 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:44.503042 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:44.727307 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:45.009271 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:45.009546 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:45.009665 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:45.255478 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:45.510914 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:45.511592 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:45.511952 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:45.727763 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:46.019229 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:46.019705 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:46.020121 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:46.227928 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:46.502839 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:46.504039 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:46.505970 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:46.728800 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:47.016437 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:47.016614 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:47.016786 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:47.228633 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:47.501692 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:47.502004 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:47.502929 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:47.726824 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:48.008737 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:48.008830 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:48.011706 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:48.227625 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:48.502341 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:48.502612 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:48.502979 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:48.727406 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:49.005101 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:49.005281 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:49.005948 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:49.227945 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:49.502797 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:49.503179 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:49.503753 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:49.727698 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:50.005959 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:50.008611 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:50.010006 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:50.227406 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:50.503206 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:50.503621 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:50.503741 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:50.727591 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:51.005124 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:51.006185 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:51.006980 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:51.227564 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:51.502857 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:51.503347 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:51.503712 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:51.728292 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:52.002458 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:52.002741 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:52.005117 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:52.227183 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:52.502138 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:52.502288 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:52.504034 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:52.727424 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:53.002195 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:53.002469 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:53.002799 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:53.227770 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:53.502607 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:53.502779 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:53.503126 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:53.727298 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:54.001504 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:54.001736 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:54.003819 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:54.227097 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:54.500992 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:54.501240 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:54.502879 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:54.727994 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:54.999639 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:55.004992 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:55.005253 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:55.235217 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:55.502543 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:55.502999 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:55.503315 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:55.727523 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:56.001762 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:56.002161 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:56.005447 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:56.227997 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:56.500005 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:56.501642 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:56.501823 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:56.727731 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:57.004094 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:57.004311 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:57.004386 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:57.227948 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:57.499162 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:57.501230 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:57.501373 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:57.728426 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:58.005796 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:58.006755 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:58.008079 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:58.228205 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:58.462310 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:58.503713 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:58.503800 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:58.504505 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:58.727458 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:59.005887 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:59.006304 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:59.006726 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:59.228181 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:59.503383 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:59.503939 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:59.505355 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:59.585804 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.123460058s)
	W1018 12:02:59.585900 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:59.585963 2077724 retry.go:31] will retry after 26.550505718s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:59.728227 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:00.002395 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:00.002957 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:00.030451 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:00.266341 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:00.503206 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:00.504132 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:00.519382 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:00.727824 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:01.001718 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:01.002197 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:01.006422 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:01.228016 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:01.503357 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:01.505002 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:01.506196 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:01.727063 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:02.003772 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:02.004370 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:02.007484 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:02.228031 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:02.503799 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:02.504162 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:02.504231 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:02.727150 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:03.000278 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:03.003609 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:03.003623 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:03.227542 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:03.501852 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:03.502056 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:03.503375 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:03.728495 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:04.004220 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:04.005349 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:04.009220 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:04.229420 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:04.515378 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:04.516285 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:04.516414 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:04.727718 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:05.007245 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:05.007298 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:05.008724 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:05.230549 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:05.512261 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:05.512516 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:05.512963 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:05.727567 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:06.000401 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:06.001885 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:06.004015 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:06.226775 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:06.502287 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:06.503039 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:06.503971 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:06.727826 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:07.001841 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:07.002743 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:07.004936 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:07.227901 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:07.500888 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:07.501008 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:07.503391 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:07.727370 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:08.000708 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:08.001282 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:08.005672 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:08.228118 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:08.503072 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:08.503349 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:08.503515 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:08.727984 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:09.004766 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:09.004868 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:09.005664 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:09.227711 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:09.499542 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:09.503403 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:09.503728 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:09.728160 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:10.002553 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:10.005485 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:10.008937 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:10.227774 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:10.504831 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:10.505425 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:10.505585 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:10.728164 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:11.003271 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:11.004554 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:11.005767 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:11.228135 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:11.505201 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:11.505406 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:11.505918 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:11.727447 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:12.007360 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:12.007931 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:12.008431 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:12.228144 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:12.500781 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:12.500916 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:12.501542 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:12.727774 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:12.999786 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:13.001442 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:13.005162 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:13.227730 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:13.500170 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:13.500421 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:13.502221 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:13.727605 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:14.005252 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:14.005848 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:14.006956 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:14.226728 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:14.501064 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:14.502593 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:14.502788 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:14.728228 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:15.010416 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:15.011700 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:15.011735 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:15.230765 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:15.501403 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:15.502306 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:15.503251 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:15.727033 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:16.001677 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:16.005509 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:16.006397 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:16.228449 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:16.503435 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:16.503587 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:16.504115 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:16.727214 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:17.003277 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:17.003776 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:17.006771 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:17.229143 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:17.501546 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:17.501737 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:17.503759 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:17.728852 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:18.003620 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:18.004267 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:18.006697 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:18.227637 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:18.500739 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:18.502926 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:18.503265 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:18.727562 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:19.008585 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:19.008804 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:19.008872 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:19.228546 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:19.499220 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:19.500568 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:19.501361 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:19.727169 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:20.079743 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:20.080280 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:20.080797 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:20.228036 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:20.500503 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:20.501312 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:20.504463 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:20.727707 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:21.037881 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:21.038019 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:21.038278 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:21.226976 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:21.499147 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:21.500613 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:21.502037 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:21.727211 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:22.006446 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:22.007168 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:22.007932 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:22.227932 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:22.499356 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:22.501516 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:22.501697 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:22.728083 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:23.004233 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:23.006471 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:23.006669 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:23.228237 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:23.502927 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:23.503281 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:23.503358 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:23.727592 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:23.999898 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:24.000687 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:24.005469 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:24.232525 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:24.501488 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:24.501852 2077724 kapi.go:107] duration metric: took 1m20.50511517s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 12:03:24.504922 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:24.727573 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:24.999720 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:25.003128 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:25.227715 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:25.499983 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:25.504283 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:25.726627 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:25.999879 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:26.003088 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:26.137354 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:03:26.227338 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:26.501375 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:26.512655 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:26.729027 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:27.000474 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:27.004984 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:27.227487 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:27.301074 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.163678197s)
	W1018 12:03:27.301125 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 12:03:27.301235 2077724 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
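
The validation failure above reports that a document inside ig-crd.yaml carries neither apiVersion nor kind, both of which kubectl requires on every manifest document; a stray `---` separator followed by an empty or comment-only document produces exactly this error. The file's actual contents are not captured in this log, so the sketch below only illustrates the two required header fields on a CRD-style document (the resource name is hypothetical, and the spec is omitted for brevity):

    # Every document kubectl validates must set both fields below; a document
    # missing them fails with "apiVersion not set, kind not set".
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: traces.gadget.kinvolk.io   # hypothetical name, for illustration only
    # (spec omitted; the point here is only the required header fields)
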
	I1018 12:03:27.499118 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:27.501287 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:27.727788 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:28.004575 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:28.008417 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:28.229655 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:28.501242 2077724 kapi.go:107] duration metric: took 1m24.505309751s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 12:03:28.509668 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:28.730516 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:29.003211 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:29.227992 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:29.502467 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:29.727668 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:30.004491 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:30.227463 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:30.504076 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:30.728572 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:31.009050 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:31.227148 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:31.503746 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:31.730453 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:32.005805 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:32.231507 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:32.501665 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:32.727634 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:33.004213 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:33.227250 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:33.504887 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:33.738406 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:34.014732 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:34.233936 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:34.502538 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:34.727908 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:35.009631 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:35.228098 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:35.502367 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:35.727581 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:36.004156 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:36.236864 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:36.502160 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:36.727367 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:37.005023 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:37.227661 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:37.502624 2077724 kapi.go:107] duration metric: took 1m33.504001092s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 12:03:37.727580 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:38.229048 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:38.728075 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:39.227422 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:39.727896 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:40.227615 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:40.728417 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:41.229219 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:41.728138 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:42.248176 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:42.728872 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:43.227576 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:43.727480 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:44.227826 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:44.727314 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:45.239629 2077724 kapi.go:107] duration metric: took 1m40.01557586s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 12:03:45.242699 2077724 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-897172 cluster.
	I1018 12:03:45.245514 2077724 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 12:03:45.248471 2077724 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
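
The `gcp-auth-skip-secret` opt-out described in the message above is an ordinary pod label; per that message, the presence of the key is what matters. A minimal sketch, in which the pod name and image are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds               # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"   # the key opts this pod out of credential mounting
    spec:
      containers:
      - name: app
        image: docker.io/nginx:alpine
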
	I1018 12:03:45.253114 2077724 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner-rancher, ingress-dns, registry-creds, cloud-spanner, volcano, nvidia-device-plugin, metrics-server, storage-provisioner, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1018 12:03:45.259554 2077724 addons.go:514] duration metric: took 1m50.117016996s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner-rancher ingress-dns registry-creds cloud-spanner volcano nvidia-device-plugin metrics-server storage-provisioner yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1018 12:03:45.259652 2077724 start.go:246] waiting for cluster config update ...
	I1018 12:03:45.259698 2077724 start.go:255] writing updated cluster config ...
	I1018 12:03:45.260141 2077724 ssh_runner.go:195] Run: rm -f paused
	I1018 12:03:45.265506 2077724 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:03:45.338220 2077724 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-72vfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.351269 2077724 pod_ready.go:94] pod "coredns-66bc5c9577-72vfc" is "Ready"
	I1018 12:03:45.351306 2077724 pod_ready.go:86] duration metric: took 13.049932ms for pod "coredns-66bc5c9577-72vfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.358453 2077724 pod_ready.go:83] waiting for pod "etcd-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.371893 2077724 pod_ready.go:94] pod "etcd-addons-897172" is "Ready"
	I1018 12:03:45.371920 2077724 pod_ready.go:86] duration metric: took 13.427326ms for pod "etcd-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.382605 2077724 pod_ready.go:83] waiting for pod "kube-apiserver-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.393669 2077724 pod_ready.go:94] pod "kube-apiserver-addons-897172" is "Ready"
	I1018 12:03:45.393729 2077724 pod_ready.go:86] duration metric: took 11.092744ms for pod "kube-apiserver-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.398420 2077724 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.669316 2077724 pod_ready.go:94] pod "kube-controller-manager-addons-897172" is "Ready"
	I1018 12:03:45.669347 2077724 pod_ready.go:86] duration metric: took 270.88071ms for pod "kube-controller-manager-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.869697 2077724 pod_ready.go:83] waiting for pod "kube-proxy-5wvw6" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:46.269620 2077724 pod_ready.go:94] pod "kube-proxy-5wvw6" is "Ready"
	I1018 12:03:46.269702 2077724 pod_ready.go:86] duration metric: took 399.975412ms for pod "kube-proxy-5wvw6" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:46.470278 2077724 pod_ready.go:83] waiting for pod "kube-scheduler-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:46.869534 2077724 pod_ready.go:94] pod "kube-scheduler-addons-897172" is "Ready"
	I1018 12:03:46.869567 2077724 pod_ready.go:86] duration metric: took 399.256552ms for pod "kube-scheduler-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:46.869580 2077724 pod_ready.go:40] duration metric: took 1.604038848s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:03:46.942667 2077724 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 12:03:46.945906 2077724 out.go:179] * Done! kubectl is now configured to use "addons-897172" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5499481ac9b50       1611cd07b61d5       9 minutes ago       Running             busybox                   0                   4a3ddf10376ea       busybox                                     default
	5a018d85eae0c       7b6a2fb1abbc9       10 minutes ago      Running             gadget                    0                   b6af92b83bc4f       gadget-6z8nf                                gadget
	a4c86ede4040f       21bfedf4686d5       10 minutes ago      Running             controller                0                   f9f3903a76c7c       ingress-nginx-controller-675c5ddd98-6zvh8   ingress-nginx
	efc2d51581a80       9a80c0c8eb61c       11 minutes ago      Exited              patch                     0                   120e97d197871       ingress-nginx-admission-patch-xmghg         ingress-nginx
	4c0137541ea54       34da3fe7b8efb       11 minutes ago      Running             minikube-ingress-dns      0                   8ba068ab69e71       kube-ingress-dns-minikube                   kube-system
	68473577ab424       9a80c0c8eb61c       11 minutes ago      Exited              create                    0                   06cb898a4cb4d       ingress-nginx-admission-create-kx9wc        ingress-nginx
	2d600e8cf22c7       ba04bb24b9575       11 minutes ago      Running             storage-provisioner       0                   27a91c68a6394       storage-provisioner                         kube-system
	f123a37f27029       138784d87c9c5       11 minutes ago      Running             coredns                   0                   1e37461c9699c       coredns-66bc5c9577-72vfc                    kube-system
	0f2a5a2b37744       05baa95f5142d       12 minutes ago      Running             kube-proxy                0                   21f4933f9660e       kube-proxy-5wvw6                            kube-system
	ae55d5b301167       b1a8c6f707935       12 minutes ago      Running             kindnet-cni               0                   d0f4193f79f9d       kindnet-zx4jd                               kube-system
	5f5b20ddb03b7       7eb2c6ff0c5a7       12 minutes ago      Running             kube-controller-manager   0                   d28db8ac80b8b       kube-controller-manager-addons-897172       kube-system
	a4a3de681e4e8       43911e833d64d       12 minutes ago      Running             kube-apiserver            0                   6a479a5201f27       kube-apiserver-addons-897172                kube-system
	7f058fe4c8a27       a1894772a478e       12 minutes ago      Running             etcd                      0                   46436fef2ffc5       etcd-addons-897172                          kube-system
	eed961508f62d       b5f57ec6b9867       12 minutes ago      Running             kube-scheduler            0                   91a66c4579b1f       kube-scheduler-addons-897172                kube-system
	
	
	==> containerd <==
	Oct 18 12:08:49 addons-897172 containerd[754]: time="2025-10-18T12:08:49.380191227Z" level=info msg="RemoveContainer for \"82ce7453ce9fa221bcd9827ecd39e8aacb028249a9a8c2f247b7b711391c8433\" returns successfully"
	Oct 18 12:08:49 addons-897172 containerd[754]: time="2025-10-18T12:08:49.380784951Z" level=error msg="ContainerStatus for \"82ce7453ce9fa221bcd9827ecd39e8aacb028249a9a8c2f247b7b711391c8433\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82ce7453ce9fa221bcd9827ecd39e8aacb028249a9a8c2f247b7b711391c8433\": not found"
	Oct 18 12:08:51 addons-897172 containerd[754]: time="2025-10-18T12:08:51.389896924Z" level=info msg="StopPodSandbox for \"29b9494905e05744af26db6b650234922d446640778133ffb12416e13baa626d\""
	Oct 18 12:08:51 addons-897172 containerd[754]: time="2025-10-18T12:08:51.397504592Z" level=info msg="TearDown network for sandbox \"29b9494905e05744af26db6b650234922d446640778133ffb12416e13baa626d\" successfully"
	Oct 18 12:08:51 addons-897172 containerd[754]: time="2025-10-18T12:08:51.397541580Z" level=info msg="StopPodSandbox for \"29b9494905e05744af26db6b650234922d446640778133ffb12416e13baa626d\" returns successfully"
	Oct 18 12:08:51 addons-897172 containerd[754]: time="2025-10-18T12:08:51.398083244Z" level=info msg="RemovePodSandbox for \"29b9494905e05744af26db6b650234922d446640778133ffb12416e13baa626d\""
	Oct 18 12:08:51 addons-897172 containerd[754]: time="2025-10-18T12:08:51.398118459Z" level=info msg="Forcibly stopping sandbox \"29b9494905e05744af26db6b650234922d446640778133ffb12416e13baa626d\""
	Oct 18 12:08:51 addons-897172 containerd[754]: time="2025-10-18T12:08:51.405707371Z" level=info msg="TearDown network for sandbox \"29b9494905e05744af26db6b650234922d446640778133ffb12416e13baa626d\" successfully"
	Oct 18 12:08:51 addons-897172 containerd[754]: time="2025-10-18T12:08:51.411884568Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29b9494905e05744af26db6b650234922d446640778133ffb12416e13baa626d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 18 12:08:51 addons-897172 containerd[754]: time="2025-10-18T12:08:51.411964263Z" level=info msg="RemovePodSandbox \"29b9494905e05744af26db6b650234922d446640778133ffb12416e13baa626d\" returns successfully"
	Oct 18 12:09:04 addons-897172 containerd[754]: time="2025-10-18T12:09:04.916517541Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Oct 18 12:09:04 addons-897172 containerd[754]: time="2025-10-18T12:09:04.918929870Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:09:05 addons-897172 containerd[754]: time="2025-10-18T12:09:05.044817737Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:09:05 addons-897172 containerd[754]: time="2025-10-18T12:09:05.405786607Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:09:05 addons-897172 containerd[754]: time="2025-10-18T12:09:05.405865399Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=21300"
	Oct 18 12:10:50 addons-897172 containerd[754]: time="2025-10-18T12:10:50.916629705Z" level=info msg="PullImage \"busybox:stable\""
	Oct 18 12:10:50 addons-897172 containerd[754]: time="2025-10-18T12:10:50.918904724Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:10:51 addons-897172 containerd[754]: time="2025-10-18T12:10:51.057670119Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:10:51 addons-897172 containerd[754]: time="2025-10-18T12:10:51.350924261Z" level=error msg="PullImage \"busybox:stable\" failed" error="failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:10:51 addons-897172 containerd[754]: time="2025-10-18T12:10:51.350976526Z" level=info msg="stop pulling image docker.io/library/busybox:stable: active requests=0, bytes read=10979"
	Oct 18 12:11:57 addons-897172 containerd[754]: time="2025-10-18T12:11:57.917821341Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Oct 18 12:11:57 addons-897172 containerd[754]: time="2025-10-18T12:11:57.920260966Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:11:58 addons-897172 containerd[754]: time="2025-10-18T12:11:58.046793206Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:11:58 addons-897172 containerd[754]: time="2025-10-18T12:11:58.453935130Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:11:58 addons-897172 containerd[754]: time="2025-10-18T12:11:58.454047061Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=21300"
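
The repeated 429 responses above are Docker Hub's unauthenticated pull rate limit; the nginx:alpine and busybox:stable pulls that keep this test's pods in ImagePullBackOff all fail the same way. The interleaved "failed to decode hosts.toml" errors appear to come from a malformed registry hosts configuration in the node image and look independent of the rate limiting. Assuming registry credentials are available, one common mitigation is to pull through an imagePullSecret so pulls count against an authenticated quota; the secret name below is hypothetical and would be created separately (for example with `kubectl create secret docker-registry`):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-authenticated        # hypothetical pod name
    spec:
      imagePullSecrets:
      - name: dockerhub-creds          # assumed to already exist in the namespace
      containers:
      - name: nginx
        image: docker.io/nginx:alpine
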
	
	
	==> coredns [f123a37f27029ed2d0dd392b04368405c969d126e49fca266067c7e310ee6f94] <==
	[INFO] 10.244.0.19:37177 - 22478 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00173961s
	[INFO] 10.244.0.19:37177 - 49682 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000120243s
	[INFO] 10.244.0.19:37177 - 1763 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000150987s
	[INFO] 10.244.0.19:54520 - 609 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000132296s
	[INFO] 10.244.0.19:54520 - 855 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093011s
	[INFO] 10.244.0.19:55596 - 43458 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107452s
	[INFO] 10.244.0.19:55596 - 43623 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00014814s
	[INFO] 10.244.0.19:34831 - 7198 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000115s
	[INFO] 10.244.0.19:34831 - 7618 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000131648s
	[INFO] 10.244.0.19:38423 - 50624 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001058765s
	[INFO] 10.244.0.19:38423 - 50204 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00144571s
	[INFO] 10.244.0.19:42693 - 42496 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000139041s
	[INFO] 10.244.0.19:42693 - 42900 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000225915s
	[INFO] 10.244.0.25:57771 - 30763 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151964s
	[INFO] 10.244.0.25:53955 - 25291 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000153817s
	[INFO] 10.244.0.25:36816 - 301 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000156665s
	[INFO] 10.244.0.25:48027 - 62261 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152423s
	[INFO] 10.244.0.25:50850 - 2023 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000150348s
	[INFO] 10.244.0.25:33129 - 55957 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134036s
	[INFO] 10.244.0.25:35253 - 51648 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001863587s
	[INFO] 10.244.0.25:41054 - 43197 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001946046s
	[INFO] 10.244.0.25:32813 - 30229 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00142852s
	[INFO] 10.244.0.25:35445 - 2635 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001811879s
	[INFO] 10.244.0.29:50482 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000198017s
	[INFO] 10.244.0.29:48743 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127635s
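
The NXDOMAIN/NOERROR pairs above are the resolver walking the pod's search path (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the bare name resolves, which is the default ndots:5 behavior. Where that churn matters, a pod can lower ndots through dnsConfig; a minimal sketch, with the pod name and image illustrative rather than taken from this run:

        # Sketch: with ndots=1, names containing at least one dot are tried
        # as-is before the search suffixes are appended, avoiding the
        # NXDOMAIN round trips logged above.
        kubectl --context addons-897172 apply -f - <<'EOF'
        apiVersion: v1
        kind: Pod
        metadata:
          name: ndots-demo
        spec:
          containers:
          - name: app
            image: busybox:stable
            command: ["sleep", "3600"]
          dnsConfig:
            options:
            - name: ndots
              value: "1"
        EOF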
	
	
	==> describe nodes <==
	Name:               addons-897172
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-897172
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=addons-897172
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_01_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-897172
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:01:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-897172
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:14:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:12:04 +0000   Sat, 18 Oct 2025 12:01:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:12:04 +0000   Sat, 18 Oct 2025 12:01:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:12:04 +0000   Sat, 18 Oct 2025 12:01:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:12:04 +0000   Sat, 18 Oct 2025 12:02:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-897172
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                4f3b39b2-3519-409f-9958-4d7fb9c61252
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m56s
	  gadget                      gadget-6z8nf                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-6zvh8    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-72vfc                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-897172                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-zx4jd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-addons-897172                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-897172        200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5wvw6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-897172                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-897172 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-897172 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-897172 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-897172 event: Registered Node addons-897172 in Controller
	  Normal   NodeReady                11m   kubelet          Node addons-897172 status is now: NodeReady
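
For reference, the percentages under "Allocated resources" are computed against the node's allocatable capacity and truncated to whole percentages: 950m of the 2-CPU (2000m) node is 47%, and 310Mi of the roughly 7834Mi (8022304Ki) of memory prints as 3%.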
	
	
	==> dmesg <==
	[Oct18 11:37] overlayfs: idmapped layers are currently not supported
	[Oct18 11:38] overlayfs: idmapped layers are currently not supported
	[Oct18 11:40] overlayfs: idmapped layers are currently not supported
	[Oct18 11:42] overlayfs: idmapped layers are currently not supported
	[Oct18 11:43] overlayfs: idmapped layers are currently not supported
	[ +44.292171] overlayfs: idmapped layers are currently not supported
	[  +9.552091] overlayfs: idmapped layers are currently not supported
	[Oct18 11:44] overlayfs: idmapped layers are currently not supported
	[Oct18 11:45] overlayfs: idmapped layers are currently not supported
	[Oct18 11:47] overlayfs: idmapped layers are currently not supported
	[ +55.826989] overlayfs: idmapped layers are currently not supported
	[Oct18 11:48] overlayfs: idmapped layers are currently not supported
	[Oct18 11:49] overlayfs: idmapped layers are currently not supported
	[Oct18 11:50] overlayfs: idmapped layers are currently not supported
	[Oct18 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.885672] overlayfs: idmapped layers are currently not supported
	[ +14.381354] overlayfs: idmapped layers are currently not supported
	[Oct18 11:52] overlayfs: idmapped layers are currently not supported
	[Oct18 11:53] overlayfs: idmapped layers are currently not supported
	[Oct18 11:54] overlayfs: idmapped layers are currently not supported
	[Oct18 11:55] overlayfs: idmapped layers are currently not supported
	[ +48.139503] overlayfs: idmapped layers are currently not supported
	[Oct18 11:56] overlayfs: idmapped layers are currently not supported
	[Oct18 11:58] overlayfs: idmapped layers are currently not supported
	[Oct18 12:00] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [7f058fe4c8a27bae83a3121872d9020f0b81bb2a961f4d1d3865631f9eb1cb98] <==
	{"level":"warn","ts":"2025-10-18T12:01:45.827272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:45.843184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:45.867405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:45.921253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:45.957774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:46.020579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:46.183723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:04.903202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:04.925269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.079356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.114347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.147588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.173656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.239520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.269796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.297409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.322343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.342374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.368579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.388390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.403048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.420470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40092","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:11:44.511563Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":2356}
	{"level":"info","ts":"2025-10-18T12:11:44.591930Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":2356,"took":"79.529418ms","hash":1288955804,"current-db-size-bytes":10285056,"current-db-size":"10 MB","current-db-size-in-use-bytes":2691072,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2025-10-18T12:11:44.591985Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1288955804,"revision":2356,"compact-revision":-1}
	
	
	==> kernel <==
	 12:14:06 up 13:56,  0 user,  load average: 0.08, 0.63, 1.70
	Linux addons-897172 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ae55d5b3011674645856125c077c0b37c32b369b9d48901bc0f2b10e818a5d03] <==
	I1018 12:12:06.515176       1 main.go:301] handling current node
	I1018 12:12:16.515266       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:12:16.515297       1 main.go:301] handling current node
	I1018 12:12:26.516879       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:12:26.516921       1 main.go:301] handling current node
	I1018 12:12:36.521462       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:12:36.521498       1 main.go:301] handling current node
	I1018 12:12:46.523212       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:12:46.523247       1 main.go:301] handling current node
	I1018 12:12:56.515985       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:12:56.516091       1 main.go:301] handling current node
	I1018 12:13:06.515782       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:13:06.515820       1 main.go:301] handling current node
	I1018 12:13:16.515975       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:13:16.516011       1 main.go:301] handling current node
	I1018 12:13:26.516011       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:13:26.516109       1 main.go:301] handling current node
	I1018 12:13:36.515876       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:13:36.515915       1 main.go:301] handling current node
	I1018 12:13:46.515951       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:13:46.515986       1 main.go:301] handling current node
	I1018 12:13:56.516363       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:13:56.516409       1 main.go:301] handling current node
	I1018 12:14:06.519920       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:14:06.519957       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a4a3de681e4e8ff78c7f0626b2e00e1dca908b684158845b5a0598ddecd97b44] <==
	W1018 12:04:18.617150       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1018 12:04:18.752066       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1018 12:04:36.296356       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33584: use of closed network connection
	E1018 12:04:36.552640       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33612: use of closed network connection
	E1018 12:04:36.688215       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33634: use of closed network connection
	I1018 12:04:46.787254       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.253.206"}
	I1018 12:05:38.581797       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1018 12:05:40.813963       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1018 12:05:48.867172       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1018 12:05:57.102952       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 12:05:57.102999       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 12:05:57.126737       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 12:05:57.128261       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 12:05:57.139026       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 12:05:57.139069       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 12:05:57.164107       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 12:05:57.164157       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 12:05:57.193879       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 12:05:57.195673       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1018 12:05:58.127775       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1018 12:05:58.194341       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1018 12:05:58.306383       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1018 12:06:04.563228       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 12:06:04.832121       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.67.8"}
	I1018 12:11:47.099444       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5f5b20ddb03b78b97b34dae7af991cacd7a4814b2e47d6f00498550e3a948b41] <==
	E1018 12:13:17.659978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:13:30.311276       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:13:30.312456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:13:35.811104       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:13:35.812906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:13:40.161824       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:13:40.163269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:13:45.040467       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:13:45.042149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:13:47.125714       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:13:47.126934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:13:47.888127       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:13:47.889454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:13:48.440153       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:13:48.441199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:13:54.698299       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:13:54.699494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:13:58.392060       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:13:58.393144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:14:03.236474       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:14:03.237637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:14:04.691400       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:14:04.692452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:14:05.938410       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:14:05.939591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [0f2a5a2b37744d869e19d0c2f143c407ed44b5af5d0e9ff2e2e66ed49f58124f] <==
	I1018 12:01:56.278233       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:01:56.385311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:01:56.485817       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:01:56.485854       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:01:56.485931       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:01:56.520377       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:01:56.520428       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:01:56.612820       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:01:56.613137       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:01:56.613152       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:01:56.614559       1 config.go:200] "Starting service config controller"
	I1018 12:01:56.614568       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:01:56.614584       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:01:56.614588       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:01:56.614598       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:01:56.614608       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:01:56.618429       1 config.go:309] "Starting node config controller"
	I1018 12:01:56.618452       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:01:56.618476       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:01:56.715939       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:01:56.715961       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:01:56.715931       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
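
The "Kube-proxy configuration may be incomplete or incorrect" line is advisory: with nodePortAddresses unset, NodePort connections are accepted on every local IP. The log's own suggestion maps to a one-line field in KubeProxyConfiguration; a sketch against the standard kubeadm ConfigMap (names assumed, not shown in this run):

        # Sketch: apply the setting the warning suggests via the kube-proxy
        # ConfigMap (KubeProxyConfiguration lives under the config.conf key).
        kubectl --context addons-897172 -n kube-system edit configmap kube-proxy
        # in config.conf, set:
        #   nodePortAddresses: ["primary"]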
	
	
	==> kube-scheduler [eed961508f62df2082fd87bc190e9e45a0d98f76c26c34aabd2e3a5140f8463e] <==
	E1018 12:01:47.170669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:01:47.170735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:01:47.172008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:01:47.172243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:01:47.172299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:01:47.172335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:01:47.172370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:01:47.175927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:01:47.189630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:01:47.189827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:01:47.986084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:01:48.034367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:01:48.046700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:01:48.139900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:01:48.160417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:01:48.192851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:01:48.270932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:01:48.343558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:01:48.345092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:01:48.356568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:01:48.380141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:01:48.433685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:01:48.475010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:01:48.484565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1018 12:01:49.942945       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:11:58 addons-897172 kubelet[1481]: E1018 12:11:58.454322    1481 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 18 12:11:58 addons-897172 kubelet[1481]: E1018 12:11:58.454402    1481 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(69e78953-0244-4b1b-b6b5-2de0b5385adf): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 12:11:58 addons-897172 kubelet[1481]: E1018 12:11:58.454439    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:11:58 addons-897172 kubelet[1481]: E1018 12:11:58.916481    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:12:06 addons-897172 kubelet[1481]: I1018 12:12:06.915963    1481 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:12:09 addons-897172 kubelet[1481]: E1018 12:12:09.917635    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:12:09 addons-897172 kubelet[1481]: E1018 12:12:09.918941    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:12:20 addons-897172 kubelet[1481]: E1018 12:12:20.916429    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:12:20 addons-897172 kubelet[1481]: E1018 12:12:20.916778    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:12:32 addons-897172 kubelet[1481]: E1018 12:12:32.916415    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:12:34 addons-897172 kubelet[1481]: E1018 12:12:34.916527    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:12:45 addons-897172 kubelet[1481]: E1018 12:12:45.918162    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:12:46 addons-897172 kubelet[1481]: E1018 12:12:46.915799    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:12:59 addons-897172 kubelet[1481]: E1018 12:12:59.916313    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:13:01 addons-897172 kubelet[1481]: E1018 12:13:01.916117    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:13:13 addons-897172 kubelet[1481]: E1018 12:13:13.915976    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:13:16 addons-897172 kubelet[1481]: E1018 12:13:16.916304    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:13:25 addons-897172 kubelet[1481]: E1018 12:13:25.917310    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:13:31 addons-897172 kubelet[1481]: E1018 12:13:31.916278    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:13:33 addons-897172 kubelet[1481]: I1018 12:13:33.916752    1481 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:13:40 addons-897172 kubelet[1481]: E1018 12:13:40.915989    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:13:42 addons-897172 kubelet[1481]: E1018 12:13:42.915985    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:13:51 addons-897172 kubelet[1481]: E1018 12:13:51.916179    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:13:54 addons-897172 kubelet[1481]: E1018 12:13:54.916758    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:14:04 addons-897172 kubelet[1481]: E1018 12:14:04.915985    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
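
Every kubelet error in this window is the same ImagePullBackOff pair (default/nginx and default/test-local-path) re-reported on the back-off timer; the root cause is still the Docker Hub 429 above. One way a run like this can side-step the unauthenticated limit is to load the images from the host instead of pulling in-cluster; a sketch, assuming the images are already present on the host (for example from an authenticated or earlier pull):

        # Sketch: push host-side copies of the images into the cluster runtime
        # so the pods stop pulling from registry-1.docker.io.
        out/minikube-linux-arm64 -p addons-897172 image load docker.io/nginx:alpine
        out/minikube-linux-arm64 -p addons-897172 image load busybox:stable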
	
	
	==> storage-provisioner [2d600e8cf22c791fa7ffc6ec034cffef3fa5102dfd75225ce6fa10114b83e94b] <==
	W1018 12:13:41.833897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:43.836788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:43.840880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:45.845195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:45.852934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:47.855907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:47.860948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:49.866358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:49.877753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:51.881605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:51.886638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:53.890325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:53.896659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:55.900267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:55.904772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:57.907814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:57.914348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:59.923815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:13:59.929211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:14:01.932219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:14:01.938333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:14:03.941936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:14:03.946184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:14:05.950812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:14:05.959567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
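Editor's note: the storage-provisioner messages above are API deprecation warnings, not errors — they indicate the provisioner still watches v1 Endpoints, which Kubernetes now steers toward discovery.k8s.io/v1 EndpointSlice — and they are unrelated to this failure. As a sanity check that the replacement resources are being served, one can run (a sketch; only the context name is taken from this profile):

    # EndpointSlices are the discovery.k8s.io/v1 successor to v1 Endpoints
    kubectl --context addons-897172 get endpointslices.discovery.k8s.io -A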
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-897172 -n addons-897172
helpers_test.go:269: (dbg) Run:  kubectl --context addons-897172 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx test-local-path ingress-nginx-admission-create-kx9wc ingress-nginx-admission-patch-xmghg
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-897172 describe pod nginx test-local-path ingress-nginx-admission-create-kx9wc ingress-nginx-admission-patch-xmghg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-897172 describe pod nginx test-local-path ingress-nginx-admission-create-kx9wc ingress-nginx-admission-patch-xmghg: exit status 1 (101.477743ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-897172/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:06:04 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.34
	IPs:
	  IP:  10.244.0.34
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2sf6x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2sf6x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m3s                   default-scheduler  Successfully assigned default/nginx to addons-897172
	  Warning  Failed     6m36s (x3 over 7m46s)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m3s (x5 over 8m2s)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5m2s (x2 over 8m2s)    kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     5m2s (x5 over 8m2s)    kubelet            Error: ErrImagePull
	  Warning  Failed     2m50s (x20 over 8m2s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2m35s (x21 over 8m2s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-897172/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:05:14 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvdh2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-kvdh2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  8m53s                   default-scheduler  Successfully assigned default/test-local-path to addons-897172
	  Normal   Pulling    6m6s (x5 over 8m52s)    kubelet            Pulling image "busybox:stable"
	  Warning  Failed     6m5s (x5 over 8m52s)    kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m5s (x5 over 8m52s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    3m47s (x21 over 8m52s)  kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     3m47s (x21 over 8m52s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kx9wc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xmghg" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-897172 describe pod nginx test-local-path ingress-nginx-admission-create-kx9wc ingress-nginx-admission-patch-xmghg: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-897172 addons disable ingress-dns --alsologtostderr -v=1: (1.327421541s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-897172 addons disable ingress --alsologtostderr -v=1: (7.787392174s)
--- FAIL: TestAddons/parallel/Ingress (492.69s)
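Editor's note: both image pulls above failed with HTTP 429 (toomanyrequests) from registry-1.docker.io, so the ingress addon itself never got traffic to serve; the real blocker is Docker Hub's unauthenticated pull rate limit on this shared CI host. A minimal mitigation sketch, assuming Docker Hub credentials are available to the runner (DOCKERHUB_USER and DOCKERHUB_PASS are placeholder names, not variables this suite defines):

    # Authenticate pulls in the default namespace so they stop counting
    # against the anonymous per-IP rate limit.
    kubectl --context addons-897172 create secret docker-registry dockerhub-creds \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username="$DOCKERHUB_USER" \
      --docker-password="$DOCKERHUB_PASS"
    kubectl --context addons-897172 patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"dockerhub-creds"}]}'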

                                                
                                    
TestAddons/parallel/LocalPath (230.48s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-897172 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-897172 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [197fd552-3e3a-410b-910a-4e3b17e76bd5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:337: TestAddons/parallel/LocalPath: WARNING: pod list for "default" "run=test-local-path" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:962: ***** TestAddons/parallel/LocalPath: pod "run=test-local-path" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:962: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-897172 -n addons-897172
addons_test.go:962: TestAddons/parallel/LocalPath: showing logs for failed pods as of 2025-10-18 12:08:15.258636968 +0000 UTC m=+454.516275077
addons_test.go:962: (dbg) Run:  kubectl --context addons-897172 describe po test-local-path -n default
addons_test.go:962: (dbg) kubectl --context addons-897172 describe po test-local-path -n default:
Name:             test-local-path
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-897172/192.168.49.2
Start Time:       Sat, 18 Oct 2025 12:05:14 +0000
Labels:           run=test-local-path
Annotations:      <none>
Status:           Pending
IP:               10.244.0.31
IPs:
IP:  10.244.0.31
Containers:
busybox:
Container ID:  
Image:         busybox:stable
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sh
-c
echo 'local-path-provisioner' > /test/file1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/test from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvdh2 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
data:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  test-pvc
ReadOnly:   false
kube-api-access-kvdh2:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  3m1s               default-scheduler  Successfully assigned default/test-local-path to addons-897172
Normal   BackOff    26s (x10 over 3m)  kubelet            Back-off pulling image "busybox:stable"
Warning  Failed     26s (x10 over 3m)  kubelet            Error: ImagePullBackOff
Normal   Pulling    14s (x5 over 3m)   kubelet            Pulling image "busybox:stable"
Warning  Failed     13s (x5 over 3m)   kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     13s (x5 over 3m)   kubelet            Error: ErrImagePull
addons_test.go:962: (dbg) Run:  kubectl --context addons-897172 logs test-local-path -n default
addons_test.go:962: (dbg) Non-zero exit: kubectl --context addons-897172 logs test-local-path -n default: exit status 1 (103.319805ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "test-local-path" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:962: kubectl --context addons-897172 logs test-local-path -n default: exit status 1
addons_test.go:963: failed waiting for test-local-path pod: run=test-local-path within 3m0s: context deadline exceeded
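Editor's note: same root cause as the Ingress failure above — busybox:stable never pulled because of the registry 429s. A sketch of an alternative workaround that bypasses the registry from inside the cluster, assuming the image can be fetched once on the host (or is already in its cache):

    # Side-load the image into the cluster's containerd store so the kubelet
    # never needs to contact registry-1.docker.io for it.
    docker pull busybox:stable
    out/minikube-linux-arm64 -p addons-897172 image load busybox:stable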
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-897172
helpers_test.go:243: (dbg) docker inspect addons-897172:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e79e9ade5524e8c802c11ad371fb1dbb6226665469d4bd12a08f8dd1b85b98ca",
	        "Created": "2025-10-18T12:01:21.360855514Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2078122,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:01:21.422405524Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/e79e9ade5524e8c802c11ad371fb1dbb6226665469d4bd12a08f8dd1b85b98ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e79e9ade5524e8c802c11ad371fb1dbb6226665469d4bd12a08f8dd1b85b98ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/e79e9ade5524e8c802c11ad371fb1dbb6226665469d4bd12a08f8dd1b85b98ca/hosts",
	        "LogPath": "/var/lib/docker/containers/e79e9ade5524e8c802c11ad371fb1dbb6226665469d4bd12a08f8dd1b85b98ca/e79e9ade5524e8c802c11ad371fb1dbb6226665469d4bd12a08f8dd1b85b98ca-json.log",
	        "Name": "/addons-897172",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-897172:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-897172",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e79e9ade5524e8c802c11ad371fb1dbb6226665469d4bd12a08f8dd1b85b98ca",
	                "LowerDir": "/var/lib/docker/overlay2/2163bfad29fc781452b39014b26b7af0012f5812282fd17409570d2d8604ffad-init/diff:/var/lib/docker/overlay2/647b2423f8222638985dff90791465004ec84c7fd61ca3830bba92bce09f80ef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2163bfad29fc781452b39014b26b7af0012f5812282fd17409570d2d8604ffad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2163bfad29fc781452b39014b26b7af0012f5812282fd17409570d2d8604ffad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2163bfad29fc781452b39014b26b7af0012f5812282fd17409570d2d8604ffad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-897172",
	                "Source": "/var/lib/docker/volumes/addons-897172/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-897172",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-897172",
	                "name.minikube.sigs.k8s.io": "addons-897172",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7258c9852136886c5b8615dcf21b68c25fa67387a4a5f96112e0385d16ef7171",
	            "SandboxKey": "/var/run/docker/netns/7258c9852136",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35694"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35695"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35698"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35696"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35697"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-897172": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:79:be:e3:5f:8e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1cd7773662be59cbfd50d24e7cd88733181b943a056c516a5ec6159cddc5c286",
	                    "EndpointID": "12138909070b7779605f90c0de940f5d02d7193a7f88f3958df6260c3dd6b0b4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-897172",
	                        "e79e9ade5524"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
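Editor's note: in the inspect output above, the empty HostPort values under HostConfig.PortBindings mean Docker was asked to pick free host ports; the actual assignments appear under NetworkSettings.Ports (e.g. 8443/tcp -> 127.0.0.1:35697). Individual mappings can be read directly instead of scanning the full dump:

    # Host port mapped to the apiserver port inside the container
    docker port addons-897172 8443/tcp
    # Equivalent inspect template
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-897172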
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-897172 -n addons-897172
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-897172 logs -n 25: (1.321107101s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                      ARGS                                                                                                                                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-038567                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-038567   │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │ 18 Oct 25 12:00 UTC │
	│ start   │ -o=json --download-only -p download-only-110073 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                          │ download-only-110073   │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                          │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │ 18 Oct 25 12:00 UTC │
	│ delete  │ -p download-only-110073                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-110073   │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │ 18 Oct 25 12:00 UTC │
	│ delete  │ -p download-only-038567                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-038567   │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │ 18 Oct 25 12:00 UTC │
	│ delete  │ -p download-only-110073                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ download-only-110073   │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │ 18 Oct 25 12:00 UTC │
	│ start   │ --download-only -p download-docker-697075 --alsologtostderr --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-697075 │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │                     │
	│ delete  │ -p download-docker-697075                                                                                                                                                                                                                                                                                                                                                                                                                                                      │ download-docker-697075 │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │ 18 Oct 25 12:00 UTC │
	│ start   │ --download-only -p binary-mirror-441377 --alsologtostderr --binary-mirror http://127.0.0.1:43257 --driver=docker  --container-runtime=containerd                                                                                                                                                                                                                                                                                                                               │ binary-mirror-441377   │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │                     │
	│ delete  │ -p binary-mirror-441377                                                                                                                                                                                                                                                                                                                                                                                                                                                        │ binary-mirror-441377   │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │ 18 Oct 25 12:00 UTC │
	│ addons  │ enable dashboard -p addons-897172                                                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │                     │
	│ addons  │ disable dashboard -p addons-897172                                                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │                     │
	│ start   │ -p addons-897172 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │ 18 Oct 25 12:03 UTC │
	│ addons  │ addons-897172 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:04 UTC │ 18 Oct 25 12:04 UTC │
	│ addons  │ addons-897172 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:04 UTC │ 18 Oct 25 12:04 UTC │
	│ addons  │ enable headlamp -p addons-897172 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:04 UTC │ 18 Oct 25 12:04 UTC │
	│ addons  │ addons-897172 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:04 UTC │ 18 Oct 25 12:05 UTC │
	│ ip      │ addons-897172 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons  │ addons-897172 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons  │ addons-897172 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons  │ addons-897172 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons  │ addons-897172 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons  │ addons-897172 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons  │ addons-897172 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons  │ addons-897172 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-897172          │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:06 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:00:54
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:00:54.367432 2077724 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:00:54.368114 2077724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:00:54.368130 2077724 out.go:374] Setting ErrFile to fd 2...
	I1018 12:00:54.368136 2077724 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:00:54.368684 2077724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
	I1018 12:00:54.369208 2077724 out.go:368] Setting JSON to false
	I1018 12:00:54.370052 2077724 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":49402,"bootTime":1760739453,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 12:00:54.370161 2077724 start.go:141] virtualization:  
	I1018 12:00:54.373464 2077724 out.go:179] * [addons-897172] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:00:54.377392 2077724 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:00:54.377465 2077724 notify.go:220] Checking for updates...
	I1018 12:00:54.383184 2077724 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:00:54.385984 2077724 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	I1018 12:00:54.388882 2077724 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	I1018 12:00:54.391737 2077724 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:00:54.394572 2077724 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:00:54.397755 2077724 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:00:54.427244 2077724 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:00:54.427379 2077724 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:00:54.485624 2077724 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 12:00:54.476724944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:00:54.485733 2077724 docker.go:318] overlay module found
	I1018 12:00:54.488785 2077724 out.go:179] * Using the docker driver based on user configuration
	I1018 12:00:54.491498 2077724 start.go:305] selected driver: docker
	I1018 12:00:54.491514 2077724 start.go:925] validating driver "docker" against <nil>
	I1018 12:00:54.491528 2077724 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:00:54.492226 2077724 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:00:54.548122 2077724 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 12:00:54.539431163 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:00:54.548285 2077724 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 12:00:54.548516 2077724 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:00:54.551343 2077724 out.go:179] * Using Docker driver with root privileges
	I1018 12:00:54.555054 2077724 cni.go:84] Creating CNI manager for ""
	I1018 12:00:54.555123 2077724 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1018 12:00:54.555136 2077724 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:00:54.555213 2077724 start.go:349] cluster config:
	{Name:addons-897172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-897172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:00:54.558350 2077724 out.go:179] * Starting "addons-897172" primary control-plane node in "addons-897172" cluster
	I1018 12:00:54.561132 2077724 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1018 12:00:54.564032 2077724 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:00:54.566883 2077724 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1018 12:00:54.566908 2077724 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:00:54.566928 2077724 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1018 12:00:54.566938 2077724 cache.go:58] Caching tarball of preloaded images
	I1018 12:00:54.567023 2077724 preload.go:233] Found /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1018 12:00:54.567033 2077724 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1018 12:00:54.567357 2077724 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/config.json ...
	I1018 12:00:54.567389 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/config.json: {Name:mkafcabb28ec6f80973f821bd3a3501eb808e73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:54.583536 2077724 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 12:00:54.583654 2077724 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 12:00:54.583679 2077724 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 12:00:54.583687 2077724 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 12:00:54.583695 2077724 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 12:00:54.583707 2077724 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 12:01:12.760461 2077724 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 12:01:12.760502 2077724 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:01:12.760547 2077724 start.go:360] acquireMachinesLock for addons-897172: {Name:mk3faea9d4c04d1ecb221033ca1da8db432fda2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:01:12.760674 2077724 start.go:364] duration metric: took 103.645µs to acquireMachinesLock for "addons-897172"
	I1018 12:01:12.760703 2077724 start.go:93] Provisioning new machine with config: &{Name:addons-897172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-897172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1018 12:01:12.760780 2077724 start.go:125] createHost starting for "" (driver="docker")
	I1018 12:01:12.764181 2077724 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 12:01:12.764424 2077724 start.go:159] libmachine.API.Create for "addons-897172" (driver="docker")
	I1018 12:01:12.764458 2077724 client.go:168] LocalClient.Create starting
	I1018 12:01:12.764581 2077724 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem
	I1018 12:01:12.843489 2077724 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/cert.pem
	I1018 12:01:14.748777 2077724 cli_runner.go:164] Run: docker network inspect addons-897172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:01:14.765481 2077724 cli_runner.go:211] docker network inspect addons-897172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:01:14.765564 2077724 network_create.go:284] running [docker network inspect addons-897172] to gather additional debugging logs...
	I1018 12:01:14.765587 2077724 cli_runner.go:164] Run: docker network inspect addons-897172
	W1018 12:01:14.780226 2077724 cli_runner.go:211] docker network inspect addons-897172 returned with exit code 1
	I1018 12:01:14.780265 2077724 network_create.go:287] error running [docker network inspect addons-897172]: docker network inspect addons-897172: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-897172 not found
	I1018 12:01:14.780278 2077724 network_create.go:289] output of [docker network inspect addons-897172]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-897172 not found
	
	** /stderr **
	I1018 12:01:14.780373 2077724 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:01:14.796588 2077724 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197bce0}
	I1018 12:01:14.796638 2077724 network_create.go:124] attempt to create docker network addons-897172 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 12:01:14.796700 2077724 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-897172 addons-897172
	I1018 12:01:14.851490 2077724 network_create.go:108] docker network addons-897172 192.168.49.0/24 created
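
The network step above can be reproduced and verified by hand. A minimal sketch, assuming only the Docker CLI on the host (flags trimmed to the essentials of the invocation logged above; the inspect template prints just the IPAM subnet and gateway):

	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o com.docker.network.driver.mtu=1500 \
	  --label=name.minikube.sigs.k8s.io=addons-897172 addons-897172
	# Confirm what was allocated:
	docker network inspect addons-897172 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'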
	I1018 12:01:14.851536 2077724 kic.go:121] calculated static IP "192.168.49.2" for the "addons-897172" container
	I1018 12:01:14.851611 2077724 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:01:14.866418 2077724 cli_runner.go:164] Run: docker volume create addons-897172 --label name.minikube.sigs.k8s.io=addons-897172 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:01:14.883604 2077724 oci.go:103] Successfully created a docker volume addons-897172
	I1018 12:01:14.883714 2077724 cli_runner.go:164] Run: docker run --rm --name addons-897172-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-897172 --entrypoint /usr/bin/test -v addons-897172:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:01:17.017557 2077724 cli_runner.go:217] Completed: docker run --rm --name addons-897172-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-897172 --entrypoint /usr/bin/test -v addons-897172:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.133802704s)
	I1018 12:01:17.017613 2077724 oci.go:107] Successfully prepared a docker volume addons-897172
	I1018 12:01:17.017643 2077724 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1018 12:01:17.017666 2077724 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:01:17.017730 2077724 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-897172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 12:01:21.286655 2077724 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-897172:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.268863654s)
	I1018 12:01:21.286688 2077724 kic.go:203] duration metric: took 4.269019499s to extract preloaded images to volume ...
	W1018 12:01:21.286845 2077724 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 12:01:21.286955 2077724 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:01:21.345643 2077724 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-897172 --name addons-897172 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-897172 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-897172 --network addons-897172 --ip 192.168.49.2 --volume addons-897172:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:01:21.649683 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Running}}
	I1018 12:01:21.668062 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:21.688316 2077724 cli_runner.go:164] Run: docker exec addons-897172 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:01:21.740941 2077724 oci.go:144] the created container "addons-897172" has a running status.
	I1018 12:01:21.740969 2077724 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa...
	I1018 12:01:22.849770 2077724 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:01:22.868302 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:22.884155 2077724 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:01:22.884176 2077724 kic_runner.go:114] Args: [docker exec --privileged addons-897172 chown docker:docker /home/docker/.ssh/authorized_keys]
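
The three steps above wire up SSH access into the container: a key pair is generated on the host, the public half becomes the docker user's authorized_keys, and ownership is fixed so sshd will accept it. A rough hand-rolled equivalent (paths as logged; assumes /home/docker/.ssh already exists in the kicbase image):

	KEY=/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa
	ssh-keygen -t rsa -N '' -f "$KEY"                 # key pair stays on the host
	docker cp "$KEY.pub" addons-897172:/home/docker/.ssh/authorized_keys
	docker exec --privileged addons-897172 chown docker:docker /home/docker/.ssh/authorized_keys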
	I1018 12:01:22.921385 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:22.939317 2077724 machine.go:93] provisionDockerMachine start ...
	I1018 12:01:22.939422 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:22.955189 2077724 main.go:141] libmachine: Using SSH client type: native
	I1018 12:01:22.955511 2077724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 35694 <nil> <nil>}
	I1018 12:01:22.955526 2077724 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:01:22.956181 2077724 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38028->127.0.0.1:35694: read: connection reset by peer
	I1018 12:01:26.111381 2077724 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-897172
	
	I1018 12:01:26.111408 2077724 ubuntu.go:182] provisioning hostname "addons-897172"
	I1018 12:01:26.111470 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:26.128279 2077724 main.go:141] libmachine: Using SSH client type: native
	I1018 12:01:26.128584 2077724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 35694 <nil> <nil>}
	I1018 12:01:26.128600 2077724 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-897172 && echo "addons-897172" | sudo tee /etc/hostname
	I1018 12:01:26.288230 2077724 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-897172
	
	I1018 12:01:26.288352 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:26.305202 2077724 main.go:141] libmachine: Using SSH client type: native
	I1018 12:01:26.305506 2077724 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 35694 <nil> <nil>}
	I1018 12:01:26.305527 2077724 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-897172' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-897172/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-897172' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:01:26.451923 2077724 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:01:26.451951 2077724 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-2075029/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-2075029/.minikube}
	I1018 12:01:26.451977 2077724 ubuntu.go:190] setting up certificates
	I1018 12:01:26.451991 2077724 provision.go:84] configureAuth start
	I1018 12:01:26.452052 2077724 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-897172
	I1018 12:01:26.469410 2077724 provision.go:143] copyHostCerts
	I1018 12:01:26.469496 2077724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.pem (1078 bytes)
	I1018 12:01:26.469630 2077724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-2075029/.minikube/cert.pem (1123 bytes)
	I1018 12:01:26.469689 2077724 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-2075029/.minikube/key.pem (1675 bytes)
	I1018 12:01:26.469740 2077724 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca-key.pem org=jenkins.addons-897172 san=[127.0.0.1 192.168.49.2 addons-897172 localhost minikube]
	I1018 12:01:26.659179 2077724 provision.go:177] copyRemoteCerts
	I1018 12:01:26.659290 2077724 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:01:26.659362 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:26.676340 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:26.779062 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 12:01:26.795497 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 12:01:26.812128 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:01:26.828758 2077724 provision.go:87] duration metric: took 376.741174ms to configureAuth
	I1018 12:01:26.828825 2077724 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:01:26.829047 2077724 config.go:182] Loaded profile config "addons-897172": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:01:26.829062 2077724 machine.go:96] duration metric: took 3.889721187s to provisionDockerMachine
	I1018 12:01:26.829069 2077724 client.go:171] duration metric: took 14.064604819s to LocalClient.Create
	I1018 12:01:26.829101 2077724 start.go:167] duration metric: took 14.064678836s to libmachine.API.Create "addons-897172"
	I1018 12:01:26.829116 2077724 start.go:293] postStartSetup for "addons-897172" (driver="docker")
	I1018 12:01:26.829125 2077724 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:01:26.829191 2077724 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:01:26.829242 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:26.845537 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:26.947715 2077724 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:01:26.951011 2077724 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:01:26.951039 2077724 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:01:26.951049 2077724 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-2075029/.minikube/addons for local assets ...
	I1018 12:01:26.951114 2077724 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-2075029/.minikube/files for local assets ...
	I1018 12:01:26.951136 2077724 start.go:296] duration metric: took 122.014181ms for postStartSetup
	I1018 12:01:26.951446 2077724 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-897172
	I1018 12:01:26.968927 2077724 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/config.json ...
	I1018 12:01:26.969212 2077724 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:01:26.969262 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:26.985225 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:27.085559 2077724 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
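
The two df probes above sample disk state on /var inside the machine; awk picks the relevant column out of the second line of each report:

	df -h /var  | awk 'NR==2{print $5}'    # used space as a percentage, e.g. 12%
	df -BG /var | awk 'NR==2{print $4}'    # available space in whole gigabytes, e.g. 85G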
	I1018 12:01:27.090769 2077724 start.go:128] duration metric: took 14.329972845s to createHost
	I1018 12:01:27.090794 2077724 start.go:83] releasing machines lock for "addons-897172", held for 14.330107021s
	I1018 12:01:27.090866 2077724 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-897172
	I1018 12:01:27.109332 2077724 ssh_runner.go:195] Run: cat /version.json
	I1018 12:01:27.109381 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:27.109411 2077724 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:01:27.109469 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:27.129197 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:27.146686 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:27.231561 2077724 ssh_runner.go:195] Run: systemctl --version
	I1018 12:01:27.322256 2077724 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:01:27.326495 2077724 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:01:27.326570 2077724 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:01:27.354793 2077724 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 12:01:27.354861 2077724 start.go:495] detecting cgroup driver to use...
	I1018 12:01:27.354906 2077724 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:01:27.354985 2077724 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1018 12:01:27.371087 2077724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1018 12:01:27.383685 2077724 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:01:27.383767 2077724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:01:27.401047 2077724 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:01:27.419011 2077724 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:01:27.529680 2077724 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:01:27.652118 2077724 docker.go:234] disabling docker service ...
	I1018 12:01:27.652240 2077724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:01:27.673599 2077724 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:01:27.686866 2077724 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:01:27.807668 2077724 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:01:27.920948 2077724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:01:27.934142 2077724 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:01:27.948534 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1018 12:01:27.957423 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1018 12:01:27.966616 2077724 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1018 12:01:27.966731 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1018 12:01:27.975832 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:01:27.985097 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1018 12:01:27.993939 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:01:28.005982 2077724 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:01:28.015440 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1018 12:01:28.025181 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1018 12:01:28.034460 2077724 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1018 12:01:28.043710 2077724 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:01:28.051750 2077724 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:01:28.059497 2077724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:01:28.173162 2077724 ssh_runner.go:195] Run: sudo systemctl restart containerd
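
Taken together, the sed edits above rewrite /etc/containerd/config.toml in place before the restart: cgroupfs as the cgroup driver (SystemdCgroup = false, matching the "cgroupfs" driver detected on the host), the pinned pause image, the runc v2 runtime, and unprivileged ports enabled under the CRI plugin. An illustrative fragment of the resulting file, reconstructed from those substitutions (not a complete config):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  restrict_oom_score_adj = false
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false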
	I1018 12:01:28.316764 2077724 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1018 12:01:28.316853 2077724 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1018 12:01:28.320825 2077724 start.go:563] Will wait 60s for crictl version
	I1018 12:01:28.320889 2077724 ssh_runner.go:195] Run: which crictl
	I1018 12:01:28.324594 2077724 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:01:28.349583 2077724 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1018 12:01:28.349663 2077724 ssh_runner.go:195] Run: containerd --version
	I1018 12:01:28.376134 2077724 ssh_runner.go:195] Run: containerd --version
	I1018 12:01:28.403599 2077724 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1018 12:01:28.406585 2077724 cli_runner.go:164] Run: docker network inspect addons-897172 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:01:28.422110 2077724 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:01:28.425820 2077724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:01:28.435586 2077724 kubeadm.go:883] updating cluster {Name:addons-897172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-897172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:01:28.435720 2077724 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1018 12:01:28.435788 2077724 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:01:28.464197 2077724 containerd.go:627] all images are preloaded for containerd runtime.
	I1018 12:01:28.464219 2077724 containerd.go:534] Images already preloaded, skipping extraction
	I1018 12:01:28.464279 2077724 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:01:28.489338 2077724 containerd.go:627] all images are preloaded for containerd runtime.
	I1018 12:01:28.489364 2077724 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:01:28.489372 2077724 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 containerd true true} ...
	I1018 12:01:28.489460 2077724 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-897172 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-897172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:01:28.489530 2077724 ssh_runner.go:195] Run: sudo crictl info
	I1018 12:01:28.519199 2077724 cni.go:84] Creating CNI manager for ""
	I1018 12:01:28.519225 2077724 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1018 12:01:28.519243 2077724 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:01:28.519266 2077724 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-897172 NodeName:addons-897172 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:01:28.519380 2077724 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-897172"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:01:28.519447 2077724 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:01:28.527048 2077724 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:01:28.527172 2077724 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:01:28.534949 2077724 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1018 12:01:28.547675 2077724 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:01:28.561182 2077724 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
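
The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new before being moved into place. Newer kubeadm releases ship a validator that can sanity-check such a file up front; a quick check using the binaries path from above (illustrative, not part of the logged run):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new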
	I1018 12:01:28.574426 2077724 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:01:28.577899 2077724 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
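
The one-liner above (also used earlier for host.minikube.internal) is an idempotent /etc/hosts update: filter out any stale mapping for the name, append the current one, then copy the temp file back over /etc/hosts in a single step. Decomposed:

	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$    # drop any stale entry
	printf '192.168.49.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$  # append the fresh mapping
	sudo cp /tmp/h.$$ /etc/hosts                                           # replace in one write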
	I1018 12:01:28.587519 2077724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:01:28.693533 2077724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:01:28.708498 2077724 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172 for IP: 192.168.49.2
	I1018 12:01:28.708516 2077724 certs.go:195] generating shared ca certs ...
	I1018 12:01:28.708533 2077724 certs.go:227] acquiring lock for ca certs: {Name:mkb3a5ce8c0a7d3b9a246d80f0747d48f33f9661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:28.708659 2077724 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.key
	I1018 12:01:29.318591 2077724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt ...
	I1018 12:01:29.318624 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt: {Name:mk234a1f1a44ab06efce70f0dc418f81fd52f0e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:29.318850 2077724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.key ...
	I1018 12:01:29.318868 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.key: {Name:mka82413e87eae9641ba66292e212613c5c4f977 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:29.318963 2077724 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.key
	I1018 12:01:29.775477 2077724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.crt ...
	I1018 12:01:29.775511 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.crt: {Name:mke83871c09efb42f9667eaae56a4dff5477cb8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:29.775707 2077724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.key ...
	I1018 12:01:29.775721 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.key: {Name:mka883ece760b66f8a5b38807848cff872768cdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:29.775809 2077724 certs.go:257] generating profile certs ...
	I1018 12:01:29.775886 2077724 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.key
	I1018 12:01:29.775904 2077724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt with IP's: []
	I1018 12:01:29.978082 2077724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt ...
	I1018 12:01:29.978113 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: {Name:mkd275e345beeb52a9d8089878d464409925c9ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:29.978299 2077724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.key ...
	I1018 12:01:29.978312 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.key: {Name:mkb81e10ae3bcee03ab6c71bd6bd6256321bd770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:29.978398 2077724 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.key.eaa5417a
	I1018 12:01:29.978417 2077724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.crt.eaa5417a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 12:01:30.446799 2077724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.crt.eaa5417a ...
	I1018 12:01:30.446832 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.crt.eaa5417a: {Name:mk80d4e860a9a1031216fdb6a6e05fc29213cafd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:30.447021 2077724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.key.eaa5417a ...
	I1018 12:01:30.447037 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.key.eaa5417a: {Name:mk6b8d83060337b7c23a69c8113d845615e2f56a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:30.447125 2077724 certs.go:382] copying /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.crt.eaa5417a -> /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.crt
	I1018 12:01:30.447204 2077724 certs.go:386] copying /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.key.eaa5417a -> /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.key
	I1018 12:01:30.447263 2077724 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.key
	I1018 12:01:30.447284 2077724 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.crt with IP's: []
	I1018 12:01:30.649588 2077724 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.crt ...
	I1018 12:01:30.649619 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.crt: {Name:mk976758f26e7b84df2186a05da11e24b6ac783a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:30.649795 2077724 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.key ...
	I1018 12:01:30.649808 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.key: {Name:mk8ea210538a5a3b9d868226e9e956afca9f4cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:30.649993 2077724 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 12:01:30.650033 2077724 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem (1078 bytes)
	I1018 12:01:30.650065 2077724 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:01:30.650094 2077724 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/key.pem (1675 bytes)
	I1018 12:01:30.650741 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:01:30.669339 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:01:30.687955 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:01:30.705445 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:01:30.722717 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 12:01:30.739383 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:01:30.756114 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:01:30.772767 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:01:30.789076 2077724 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:01:30.805840 2077724 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:01:30.817999 2077724 ssh_runner.go:195] Run: openssl version
	I1018 12:01:30.824201 2077724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:01:30.832590 2077724 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:01:30.836438 2077724 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:01 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:01:30.836509 2077724 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:01:30.877575 2077724 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
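
The b5213941.0 link name above is not arbitrary: OpenSSL looks CA certificates up in /etc/ssl/certs by the certificate's subject hash plus a numeric suffix, so the hash printed by the x509 command becomes the symlink name. Reproducing the link by hand:

	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$H.0"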
	I1018 12:01:30.886197 2077724 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:01:30.889693 2077724 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:01:30.889788 2077724 kubeadm.go:400] StartCluster: {Name:addons-897172 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-897172 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:01:30.889898 2077724 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1018 12:01:30.889999 2077724 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:01:30.921802 2077724 cri.go:89] found id: ""
	I1018 12:01:30.921871 2077724 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:01:30.932255 2077724 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:01:30.940899 2077724 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 12:01:30.940964 2077724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 12:01:30.950860 2077724 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:01:30.950930 2077724 kubeadm.go:157] found existing configuration files:
	
	I1018 12:01:30.951020 2077724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:01:30.959061 2077724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:01:30.959125 2077724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:01:30.966230 2077724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:01:30.973779 2077724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:01:30.973875 2077724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:01:30.980783 2077724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:01:30.988527 2077724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:01:30.988622 2077724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:01:30.995575 2077724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:01:31.004254 2077724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:01:31.004383 2077724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 12:01:31.012448 2077724 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 12:01:31.054125 2077724 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:01:31.054425 2077724 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:01:31.077631 2077724 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 12:01:31.077756 2077724 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 12:01:31.077817 2077724 kubeadm.go:318] OS: Linux
	I1018 12:01:31.077891 2077724 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 12:01:31.077980 2077724 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 12:01:31.078068 2077724 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 12:01:31.078156 2077724 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 12:01:31.078235 2077724 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 12:01:31.078314 2077724 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 12:01:31.078421 2077724 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 12:01:31.078536 2077724 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 12:01:31.078619 2077724 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 12:01:31.156469 2077724 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:01:31.156645 2077724 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:01:31.156781 2077724 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:01:31.163353 2077724 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:01:31.169708 2077724 out.go:252]   - Generating certificates and keys ...
	I1018 12:01:31.169871 2077724 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:01:31.169978 2077724 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:01:31.525626 2077724 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:01:31.848832 2077724 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:01:32.393819 2077724 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:01:32.923032 2077724 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:01:33.652073 2077724 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:01:33.652225 2077724 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-897172 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:01:34.541290 2077724 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:01:34.541435 2077724 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-897172 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:01:34.931781 2077724 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:01:36.505574 2077724 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:01:37.367371 2077724 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:01:37.367532 2077724 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:01:38.122331 2077724 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:01:39.023171 2077724 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:01:39.920347 2077724 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:01:40.197284 2077724 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:01:40.949570 2077724 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:01:40.950113 2077724 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:01:40.954682 2077724 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:01:40.958145 2077724 out.go:252]   - Booting up control plane ...
	I1018 12:01:40.958256 2077724 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:01:40.958338 2077724 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:01:40.958415 2077724 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:01:40.974831 2077724 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:01:40.974951 2077724 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:01:40.982862 2077724 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:01:40.983156 2077724 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:01:40.983209 2077724 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:01:41.123684 2077724 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:01:41.123825 2077724 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:01:42.625547 2077724 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.50193372s
	I1018 12:01:42.630971 2077724 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:01:42.631391 2077724 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 12:01:42.632452 2077724 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:01:42.632560 2077724 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:01:46.720675 2077724 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.087650261s
	I1018 12:01:47.176870 2077724 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.544343746s
	I1018 12:01:49.134763 2077724 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501996642s
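The control-plane-check phase polls the health endpoints named above until each component answers. The same probes can be issued by hand (addresses copied from the log; -k because the serving certificates are cluster-internal, and the 127.0.0.1 endpoints must be queried from the node itself):

	curl -k https://192.168.49.2:8443/livez      # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez        # kube-scheduler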
	I1018 12:01:49.158015 2077724 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:01:49.183587 2077724 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:01:49.199370 2077724 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:01:49.199605 2077724 kubeadm.go:318] [mark-control-plane] Marking the node addons-897172 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:01:49.212601 2077724 kubeadm.go:318] [bootstrap-token] Using token: p4hob0.9e6vf29erhsuavf2
	I1018 12:01:49.215600 2077724 out.go:252]   - Configuring RBAC rules ...
	I1018 12:01:49.215772 2077724 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:01:49.220164 2077724 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:01:49.231260 2077724 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:01:49.237790 2077724 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:01:49.242142 2077724 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:01:49.250360 2077724 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:01:49.541003 2077724 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:01:49.973935 2077724 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:01:50.542978 2077724 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:01:50.544541 2077724 kubeadm.go:318] 
	I1018 12:01:50.544618 2077724 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:01:50.544625 2077724 kubeadm.go:318] 
	I1018 12:01:50.544706 2077724 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:01:50.544712 2077724 kubeadm.go:318] 
	I1018 12:01:50.544738 2077724 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:01:50.544799 2077724 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:01:50.544852 2077724 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:01:50.544856 2077724 kubeadm.go:318] 
	I1018 12:01:50.544913 2077724 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:01:50.544917 2077724 kubeadm.go:318] 
	I1018 12:01:50.544967 2077724 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:01:50.544971 2077724 kubeadm.go:318] 
	I1018 12:01:50.545026 2077724 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:01:50.545104 2077724 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:01:50.545176 2077724 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:01:50.545202 2077724 kubeadm.go:318] 
	I1018 12:01:50.545291 2077724 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:01:50.545371 2077724 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:01:50.545375 2077724 kubeadm.go:318] 
	I1018 12:01:50.545463 2077724 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token p4hob0.9e6vf29erhsuavf2 \
	I1018 12:01:50.545571 2077724 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6ad86b1276159d70ddf959ffd2834e19bb4d7329ebde5370ec0afcbde1bef9f4 \
	I1018 12:01:50.545592 2077724 kubeadm.go:318] 	--control-plane 
	I1018 12:01:50.545596 2077724 kubeadm.go:318] 
	I1018 12:01:50.545685 2077724 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:01:50.545689 2077724 kubeadm.go:318] 
	I1018 12:01:50.546055 2077724 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token p4hob0.9e6vf29erhsuavf2 \
	I1018 12:01:50.546175 2077724 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6ad86b1276159d70ddf959ffd2834e19bb4d7329ebde5370ec0afcbde1bef9f4 
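The --discovery-token-ca-cert-hash pins the cluster CA for joining nodes. If the printed hash is lost, the standard kubeadm recipe recomputes it from the CA public key; note that this run uses the non-default certificate directory /var/lib/minikube/certs (see the [certs] line above), so the path below assumes ca.crt sits in that directory:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
		| openssl rsa -pubin -outform der 2>/dev/null \
		| openssl dgst -sha256 -hex | sed 's/^.* //'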
	I1018 12:01:50.550384 2077724 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 12:01:50.550639 2077724 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 12:01:50.550748 2077724 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 12:01:50.550769 2077724 cni.go:84] Creating CNI manager for ""
	I1018 12:01:50.550777 2077724 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1018 12:01:50.553944 2077724 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 12:01:50.556856 2077724 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 12:01:50.560949 2077724 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 12:01:50.560970 2077724 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 12:01:50.574166 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 12:01:50.877469 2077724 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:01:50.877597 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:50.877658 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-897172 minikube.k8s.io/updated_at=2025_10_18T12_01_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=addons-897172 minikube.k8s.io/primary=true
	I1018 12:01:51.034941 2077724 ops.go:34] apiserver oom_adj: -16
	I1018 12:01:51.035045 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:51.535966 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:52.036086 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:52.535702 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:53.035765 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:53.535332 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:54.035261 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:54.535816 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:55.036057 2077724 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:01:55.140647 2077724 kubeadm.go:1113] duration metric: took 4.263094222s to wait for elevateKubeSystemPrivileges
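The burst of `kubectl get sa default` calls between 12:01:51 and 12:01:55 is a readiness poll: kubeadm creates the "default" ServiceAccount asynchronously, and minikube waits for it before finishing the privilege elevation kicked off by the minikube-rbac clusterrolebinding at 12:01:50. An equivalent hand-rolled wait, as a sketch:

	# poll until the default ServiceAccount exists (created async after init)
	until kubectl -n default get sa default >/dev/null 2>&1; do sleep 0.5; done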
	I1018 12:01:55.140675 2077724 kubeadm.go:402] duration metric: took 24.250891332s to StartCluster
	I1018 12:01:55.140692 2077724 settings.go:142] acquiring lock: {Name:mkfe09c4f932c229739f9b782a8232962c7d94cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:55.140808 2077724 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-2075029/kubeconfig
	I1018 12:01:55.141214 2077724 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/kubeconfig: {Name:mkb34a50149724994ca0c2a0fd8679c156671366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:01:55.141423 2077724 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1018 12:01:55.141580 2077724 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:01:55.141828 2077724 config.go:182] Loaded profile config "addons-897172": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:01:55.141862 2077724 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 12:01:55.141992 2077724 addons.go:69] Setting yakd=true in profile "addons-897172"
	I1018 12:01:55.142018 2077724 addons.go:238] Setting addon yakd=true in "addons-897172"
	I1018 12:01:55.142049 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.142573 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.142832 2077724 addons.go:69] Setting inspektor-gadget=true in profile "addons-897172"
	I1018 12:01:55.142869 2077724 addons.go:238] Setting addon inspektor-gadget=true in "addons-897172"
	I1018 12:01:55.142897 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.143013 2077724 addons.go:69] Setting metrics-server=true in profile "addons-897172"
	I1018 12:01:55.143038 2077724 addons.go:238] Setting addon metrics-server=true in "addons-897172"
	I1018 12:01:55.143063 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.143304 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.143502 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.143890 2077724 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-897172"
	I1018 12:01:55.143911 2077724 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-897172"
	I1018 12:01:55.143941 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.144409 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.148350 2077724 addons.go:69] Setting registry=true in profile "addons-897172"
	I1018 12:01:55.148444 2077724 addons.go:238] Setting addon registry=true in "addons-897172"
	I1018 12:01:55.148496 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.149036 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.156763 2077724 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-897172"
	I1018 12:01:55.156798 2077724 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-897172"
	I1018 12:01:55.156839 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.157304 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.167922 2077724 addons.go:69] Setting registry-creds=true in profile "addons-897172"
	I1018 12:01:55.168012 2077724 addons.go:238] Setting addon registry-creds=true in "addons-897172"
	I1018 12:01:55.168080 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.168622 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.176046 2077724 addons.go:69] Setting cloud-spanner=true in profile "addons-897172"
	I1018 12:01:55.176085 2077724 addons.go:238] Setting addon cloud-spanner=true in "addons-897172"
	I1018 12:01:55.176131 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.176619 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.179938 2077724 addons.go:69] Setting storage-provisioner=true in profile "addons-897172"
	I1018 12:01:55.179980 2077724 addons.go:238] Setting addon storage-provisioner=true in "addons-897172"
	I1018 12:01:55.180027 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.180501 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.199055 2077724 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-897172"
	I1018 12:01:55.199136 2077724 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-897172"
	I1018 12:01:55.199169 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.199675 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.199884 2077724 out.go:179] * Verifying Kubernetes components...
	I1018 12:01:55.199057 2077724 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-897172"
	I1018 12:01:55.214980 2077724 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-897172"
	I1018 12:01:55.215338 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.199068 2077724 addons.go:69] Setting volcano=true in profile "addons-897172"
	I1018 12:01:55.232165 2077724 addons.go:238] Setting addon volcano=true in "addons-897172"
	I1018 12:01:55.232208 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.232926 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.274771 2077724 addons.go:69] Setting default-storageclass=true in profile "addons-897172"
	I1018 12:01:55.274810 2077724 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-897172"
	I1018 12:01:55.275232 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.297279 2077724 addons.go:69] Setting gcp-auth=true in profile "addons-897172"
	I1018 12:01:55.297364 2077724 mustload.go:65] Loading cluster: addons-897172
	I1018 12:01:55.297624 2077724 config.go:182] Loaded profile config "addons-897172": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:01:55.300970 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.327641 2077724 addons.go:69] Setting ingress=true in profile "addons-897172"
	I1018 12:01:55.327678 2077724 addons.go:238] Setting addon ingress=true in "addons-897172"
	I1018 12:01:55.327737 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.328548 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.353983 2077724 addons.go:69] Setting ingress-dns=true in profile "addons-897172"
	I1018 12:01:55.354027 2077724 addons.go:238] Setting addon ingress-dns=true in "addons-897172"
	I1018 12:01:55.354134 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.354781 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.365990 2077724 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:01:55.199076 2077724 addons.go:69] Setting volumesnapshots=true in profile "addons-897172"
	I1018 12:01:55.390640 2077724 addons.go:238] Setting addon volumesnapshots=true in "addons-897172"
	I1018 12:01:55.390713 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.391452 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.433171 2077724 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 12:01:55.433346 2077724 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 12:01:55.433519 2077724 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 12:01:55.441196 2077724 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 12:01:55.441225 2077724 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 12:01:55.441290 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.460612 2077724 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 12:01:55.439982 2077724 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:01:55.460925 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 12:01:55.460999 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.440213 2077724 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-897172"
	I1018 12:01:55.483741 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.484206 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.486037 2077724 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 12:01:55.490506 2077724 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 12:01:55.493535 2077724 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 12:01:55.493596 2077724 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 12:01:55.493703 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.506237 2077724 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:01:55.506259 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 12:01:55.506331 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.511628 2077724 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 12:01:55.512478 2077724 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 12:01:55.512510 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 12:01:55.512601 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.525223 2077724 out.go:179]   - Using image docker.io/volcanosh/vc-controller-manager:v1.13.0
	I1018 12:01:55.528058 2077724 out.go:179]   - Using image docker.io/volcanosh/vc-scheduler:v1.13.0
	I1018 12:01:55.530970 2077724 out.go:179]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.13.0
	I1018 12:01:55.537409 2077724 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1018 12:01:55.537437 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (1017570 bytes)
	I1018 12:01:55.537506 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.544753 2077724 addons.go:238] Setting addon default-storageclass=true in "addons-897172"
	I1018 12:01:55.544840 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.545329 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:01:55.569150 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:01:55.569409 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 12:01:55.571780 2077724 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 12:01:55.574279 2077724 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 12:01:55.574378 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.581673 2077724 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 12:01:55.582002 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 12:01:55.601574 2077724 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
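The pipeline above rewrites the coredns ConfigMap in place: the first sed expression inserts a hosts plugin block mapping host.minikube.internal to the gateway address 192.168.49.1 (with fallthrough so all other names still reach the upstream forward), the second enables the log plugin, and the result is fed back through kubectl replace. Reconstructed from the sed expressions (not captured from the cluster), the injected Corefile fragment is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}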
	I1018 12:01:55.617469 2077724 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 12:01:55.644075 2077724 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 12:01:55.644120 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 12:01:55.644203 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.617684 2077724 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:01:55.672933 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 12:01:55.673045 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.674079 2077724 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 12:01:55.687206 2077724 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 12:01:55.672749 2077724 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:01:55.672869 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 12:01:55.689267 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.690050 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.693283 2077724 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:01:55.693428 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:01:55.693491 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.701252 2077724 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:01:55.701277 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 12:01:55.704890 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.693292 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 12:01:55.693297 2077724 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:01:55.720303 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.721359 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.722009 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.722634 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 12:01:55.724068 2077724 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 12:01:55.724091 2077724 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 12:01:55.724149 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.728030 2077724 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:01:55.732920 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 12:01:55.734019 2077724 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:01:55.734062 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 12:01:55.734157 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.744423 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 12:01:55.752520 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 12:01:55.755494 2077724 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 12:01:55.762370 2077724 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 12:01:55.762405 2077724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 12:01:55.762472 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.776480 2077724 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 12:01:55.780157 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.789164 2077724 out.go:179]   - Using image docker.io/busybox:stable
	I1018 12:01:55.792965 2077724 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:01:55.792988 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 12:01:55.793056 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.802265 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.807122 2077724 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:01:55.807142 2077724 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:01:55.807198 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:01:55.883711 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.884028 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.903640 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.912332 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.918683 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.924310 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	W1018 12:01:55.932197 2077724 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 12:01:55.932242 2077724 retry.go:31] will retry after 279.479591ms: ssh: handshake failed: EOF
	W1018 12:01:55.932427 2077724 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 12:01:55.932444 2077724 retry.go:31] will retry after 258.849701ms: ssh: handshake failed: EOF
	I1018 12:01:55.933803 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.952059 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:01:55.953960 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	W1018 12:01:55.955074 2077724 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 12:01:55.955097 2077724 retry.go:31] will retry after 338.346835ms: ssh: handshake failed: EOF
	I1018 12:01:55.956960 2077724 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1018 12:01:56.194244 2077724 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 12:01:56.194272 2077724 retry.go:31] will retry after 492.979292ms: ssh: handshake failed: EOF
	I1018 12:01:56.599571 2077724 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 12:01:56.599597 2077724 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 12:01:56.636926 2077724 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 12:01:56.636951 2077724 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 12:01:56.690746 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 12:01:56.691536 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 12:01:56.693476 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 12:01:56.694965 2077724 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 12:01:56.694984 2077724 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 12:01:56.704701 2077724 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 12:01:56.704724 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 12:01:56.720278 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 12:01:56.785529 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 12:01:56.807308 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1018 12:01:56.811611 2077724 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 12:01:56.811680 2077724 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 12:01:56.831119 2077724 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 12:01:56.831194 2077724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 12:01:56.833344 2077724 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.23168759s)
	I1018 12:01:56.833537 2077724 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1018 12:01:56.834246 2077724 node_ready.go:35] waiting up to 6m0s for node "addons-897172" to be "Ready" ...
	I1018 12:01:56.845232 2077724 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:56.845301 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 12:01:56.866974 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:01:56.923573 2077724 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 12:01:56.923668 2077724 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 12:01:56.930292 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 12:01:56.992626 2077724 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 12:01:56.992706 2077724 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 12:01:56.997386 2077724 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 12:01:56.997463 2077724 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 12:01:57.042848 2077724 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:01:57.042926 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 12:01:57.082695 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:01:57.099172 2077724 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 12:01:57.099247 2077724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 12:01:57.153023 2077724 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:01:57.153043 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 12:01:57.190917 2077724 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 12:01:57.190995 2077724 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 12:01:57.200014 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 12:01:57.228361 2077724 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:01:57.228435 2077724 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 12:01:57.283526 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 12:01:57.312033 2077724 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 12:01:57.312109 2077724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 12:01:57.340058 2077724 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-897172" context rescaled to 1 replicas
	I1018 12:01:57.357577 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 12:01:57.376381 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 12:01:57.379146 2077724 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 12:01:57.379234 2077724 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 12:01:57.502013 2077724 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 12:01:57.502104 2077724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 12:01:57.505617 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:01:57.544304 2077724 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:01:57.544380 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 12:01:57.714094 2077724 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 12:01:57.714170 2077724 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 12:01:57.791967 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:01:57.991772 2077724 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 12:01:57.991799 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 12:01:58.130707 2077724 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 12:01:58.130731 2077724 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 12:01:58.349625 2077724 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 12:01:58.349657 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 12:01:58.526352 2077724 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 12:01:58.526376 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 12:01:58.785750 2077724 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 12:01:58.785771 2077724 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	W1018 12:01:58.843652 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:01:59.047504 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 12:01:59.856212 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.165382232s)
	I1018 12:01:59.856472 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.1649146s)
	I1018 12:02:00.040994 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.34747908s)
	I1018 12:02:00.041163 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.320842941s)
	I1018 12:02:00.041247 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.255640201s)
	W1018 12:02:00.850342 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	W1018 12:02:02.905260 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:03.185091 2077724 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 12:02:03.185178 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:02:03.214205 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:02:03.352867 2077724 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 12:02:03.368311 2077724 addons.go:238] Setting addon gcp-auth=true in "addons-897172"
	I1018 12:02:03.368367 2077724 host.go:66] Checking if "addons-897172" exists ...
	I1018 12:02:03.368819 2077724 cli_runner.go:164] Run: docker container inspect addons-897172 --format={{.State.Status}}
	I1018 12:02:03.398587 2077724 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 12:02:03.398651 2077724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-897172
	I1018 12:02:03.427802 2077724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35694 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/addons-897172/id_rsa Username:docker}
	I1018 12:02:03.984133 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (7.176742656s)
	I1018 12:02:03.984189 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.117145885s)
	I1018 12:02:03.984368 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.054004146s)
	I1018 12:02:03.984449 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.901691865s)
	W1018 12:02:03.984470 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:03.984490 2077724 retry.go:31] will retry after 367.487904ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
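This validation failure lines up with the transfer logged at 12:01:55.493596, where only 14 bytes were copied to /etc/kubernetes/addons/ig-crd.yaml; a file that small cannot carry a CustomResourceDefinition, which is exactly what "apiVersion not set, kind not set" reports. A quick way to confirm on the node (inspection only, nothing minikube-specific):

	# a CRD manifest must at least declare apiVersion and kind
	sudo wc -c /etc/kubernetes/addons/ig-crd.yaml
	sudo head -c 200 /etc/kubernetes/addons/ig-crd.yaml; echo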
	I1018 12:02:03.984579 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.78449376s)
	I1018 12:02:03.984593 2077724 addons.go:479] Verifying addon ingress=true in "addons-897172"
	I1018 12:02:03.984786 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.70118689s)
	I1018 12:02:03.984879 2077724 addons.go:479] Verifying addon registry=true in "addons-897172"
	I1018 12:02:03.985050 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.627392855s)
	I1018 12:02:03.985313 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.608855749s)
	I1018 12:02:03.985863 2077724 addons.go:479] Verifying addon metrics-server=true in "addons-897172"
	I1018 12:02:03.985372 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.479682153s)
	I1018 12:02:03.985483 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.193434489s)
	W1018 12:02:03.985916 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 12:02:03.985933 2077724 retry.go:31] will retry after 125.590522ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
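The csi-hostpath failure is an ordering problem rather than a bad manifest: the VolumeSnapshotClass is submitted in the same apply batch as the CRD that defines its kind, and the API server has no resource mapping for snapshot.storage.k8s.io/v1 until that CRD is established. A sketch of the split that avoids the race (file names from the log; the wait timeout is an assumption):

	# Install the snapshot CRDs and wait for the API server to serve them...
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
	  apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
	  wait --for=condition=Established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	# ...then the custom resource that depends on them.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
	  apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml

As the log shows, minikube's own remedy is the re-apply with --force at 12:02:04.112, which completes cleanly at 12:02:05.533 once the CRDs created by the first pass are registered.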
	I1018 12:02:03.985650 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.938067626s)
	I1018 12:02:03.985964 2077724 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-897172"
	I1018 12:02:03.989146 2077724 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-897172 service yakd-dashboard -n yakd-dashboard
	
	I1018 12:02:03.989181 2077724 out.go:179] * Verifying registry addon...
	I1018 12:02:03.989194 2077724 out.go:179] * Verifying ingress addon...
	I1018 12:02:03.993133 2077724 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 12:02:03.993241 2077724 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 12:02:03.995937 2077724 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 12:02:03.996736 2077724 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 12:02:03.998618 2077724 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 12:02:04.000919 2077724 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 12:02:04.004004 2077724 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 12:02:04.004047 2077724 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 12:02:04.038876 2077724 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 12:02:04.038899 2077724 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 12:02:04.074012 2077724 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:02:04.074034 2077724 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 12:02:04.099669 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 12:02:04.112594 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 12:02:04.130119 2077724 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 12:02:04.130440 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:04.130321 2077724 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 12:02:04.130512 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:04.130418 2077724 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 12:02:04.130569 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
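The kapi.go lines that dominate the remainder of this log are a poll loop: roughly every half second minikube lists the pods behind each addon's label selector and reports their phase until they leave Pending. The equivalent one-off checks from the host (selectors and namespaces from the log; the context name and the 5m timeout are assumptions):

	kubectl --context addons-897172 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
	kubectl --context addons-897172 -n kube-system wait --for=condition=Ready --timeout=5m \
	  pod -l kubernetes.io/minikube-addons=registry
	kubectl --context addons-897172 -n kube-system wait --for=condition=Ready --timeout=5m \
	  pod -l kubernetes.io/minikube-addons=csi-hostpath-driver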
	I1018 12:02:04.353037 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:04.520764 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:04.521047 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:04.521091 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:05.005904 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:05.006357 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:05.009772 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:05.211469 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.111762228s)
	I1018 12:02:05.217148 2077724 addons.go:479] Verifying addon gcp-auth=true in "addons-897172"
	I1018 12:02:05.220256 2077724 out.go:179] * Verifying gcp-auth addon...
	I1018 12:02:05.224050 2077724 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 12:02:05.227512 2077724 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 12:02:05.227587 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:05.338461 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
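The node_ready warnings interleaved below share one cause: the node's Ready condition is still False, typically because the CNI is not yet up, which also keeps the addon pods above stuck in Pending. The standalone check for the same condition (node name from the log; the timeout is an assumption):

	kubectl --context addons-897172 wait --for=condition=Ready --timeout=5m node/addons-897172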
	I1018 12:02:05.500929 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:05.501767 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:05.502241 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:05.533106 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.420472367s)
	I1018 12:02:05.727930 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:05.747389 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.39426382s)
	W1018 12:02:05.747424 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:05.747442 2077724 retry.go:31] will retry after 231.418033ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:05.979592 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:06.002289 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:06.003430 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:06.005110 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:06.227201 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:06.500574 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:06.500786 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:06.502926 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:06.727247 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:06.798356 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:06.798386 2077724 retry.go:31] will retry after 495.929746ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:07.000180 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:07.000333 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:07.003712 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:07.227952 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:07.295034 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:07.499317 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:07.500616 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:07.501555 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:07.728023 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:07.838785 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:08.002192 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:08.005381 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:08.005908 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 12:02:08.131275 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:08.131305 2077724 retry.go:31] will retry after 603.765616ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:08.227574 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:08.499166 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:08.500546 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:08.501041 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:08.726750 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:08.736066 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:09.002062 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:09.002269 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:09.004228 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:09.227657 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:09.511224 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:09.511396 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:09.512265 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 12:02:09.616046 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:09.616077 2077724 retry.go:31] will retry after 664.404477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:09.726845 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:10.003446 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:10.003803 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:10.004781 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:10.227604 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:10.280977 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 12:02:10.338623 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:10.503834 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:10.504381 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:10.504705 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:10.727145 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:11.003525 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:11.003808 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:11.003891 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 12:02:11.101255 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:11.101293 2077724 retry.go:31] will retry after 2.822526788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:11.227105 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:11.499476 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:11.500898 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:11.502072 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:11.726863 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:11.999988 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:12.000680 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:12.003876 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:12.227623 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:12.499576 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:12.499781 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:12.501735 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:12.727601 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:12.837581 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:13.000332 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:13.000556 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:13.006040 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:13.226868 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:13.501079 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:13.501488 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:13.503157 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:13.727059 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:13.924147 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:14.001326 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:14.001826 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:14.006486 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:14.227218 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:14.501005 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:14.502446 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:14.504112 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:14.727285 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:14.728882 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:14.728907 2077724 retry.go:31] will retry after 3.431148696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 12:02:14.837889 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:14.999945 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:15.005830 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:15.006327 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:15.227792 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:15.498905 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:15.501272 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:15.501379 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:15.727358 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:15.998899 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:16.005611 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:16.005741 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:16.227666 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:16.500668 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:16.501270 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:16.501930 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:16.727019 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:17.002094 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:17.002516 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:17.003330 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:17.227682 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:17.337600 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:17.500550 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:17.501192 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:17.503022 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:17.727350 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:18.000412 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:18.005904 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:18.006485 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:18.160859 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:18.227263 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:18.502249 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:18.504090 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:18.505005 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:18.727275 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:18.960260 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:18.960291 2077724 retry.go:31] will retry after 3.045277304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:19.003924 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:19.004085 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:19.004527 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:19.227560 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:19.499436 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:19.500523 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:19.501276 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:19.727100 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:19.838063 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:19.999564 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:20.001160 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:20.002168 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:20.226923 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:20.501229 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:20.501584 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:20.502374 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:20.727380 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:21.007955 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:21.008079 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:21.008198 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:21.227036 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:21.499921 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:21.500094 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:21.502663 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:21.727527 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:21.999118 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:22.000578 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:22.005242 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:22.005869 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:22.227940 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:22.338340 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:22.501129 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:22.503052 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:22.503363 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:22.727150 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:22.824406 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:22.824436 2077724 retry.go:31] will retry after 3.710811743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:23.000163 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:23.000492 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:23.005359 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:23.227538 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:23.500060 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:23.500106 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:23.502446 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:23.727534 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:24.005172 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:24.005329 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:24.005694 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:24.249868 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:24.341216 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:24.500141 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:24.500390 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:24.501980 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:24.727036 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:25.003058 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:25.003561 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:25.008457 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:25.227789 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:25.499349 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:25.500869 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:25.501179 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:25.727814 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:26.001443 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:26.002202 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:26.011972 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:26.227084 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:26.500968 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:26.501534 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:26.501595 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:26.535855 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:26.726940 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:26.838457 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:27.004524 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:27.004702 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:27.005075 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:27.227979 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:27.324866 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:27.324896 2077724 retry.go:31] will retry after 10.387791324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
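By this point the same deterministic validation error has been retried repeatedly, with the delays reported by retry.go:31 growing from a few hundred milliseconds to over ten seconds; the backoff is jittered, so the durations wander rather than doubling exactly. The observable shape of that loop, sketched as a shell loop rather than minikube's actual Go implementation (the multiplier is illustrative):

	# Keep re-running the same apply with a growing delay until it exits 0.
	# Since the manifest itself is invalid, this can only end via a timeout
	# imposed elsewhere.
	delay=0.4
	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
	      apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml; do
	    sleep "$delay"
	    delay=$(awk -v d="$delay" 'BEGIN { print d * 1.5 }')
	done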
	I1018 12:02:27.500850 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:27.501348 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:27.502267 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:27.727322 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:27.999390 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:28.000154 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:28.004067 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:28.227319 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:28.500346 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:28.500663 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:28.502257 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:28.727894 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:29.000636 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:29.001703 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:29.002757 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:29.227655 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:29.337946 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:29.499324 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:29.500414 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:29.501045 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:29.726780 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:29.999907 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:30.000082 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:30.003970 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:30.226949 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:30.501384 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:30.501454 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:30.502068 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:30.726781 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:30.999308 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:30.999884 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:31.001949 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:31.226732 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:31.500550 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:31.501185 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:31.501705 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:31.727517 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:31.837183 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:31.999247 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:32.000494 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:32.003303 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:32.227085 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:32.499106 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:32.501742 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:32.502124 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:32.726912 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:32.999758 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:32.999921 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:33.004449 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:33.227553 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:33.500346 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:33.500481 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:33.501416 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:33.727469 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:33.837227 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:33.999264 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:33.999481 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:34.002718 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:34.227625 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:34.498893 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:34.500940 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:34.501095 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:34.726955 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:35.000693 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:35.000865 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:35.003750 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:35.227522 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:35.500290 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:35.500377 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:35.502337 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:35.727223 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 12:02:35.837891 2077724 node_ready.go:57] node "addons-897172" has "Ready":"False" status (will retry)
	I1018 12:02:35.998714 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:36.007597 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:36.007727 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:36.227773 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:36.500166 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:36.501691 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:36.502703 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:36.742420 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:36.840417 2077724 node_ready.go:49] node "addons-897172" is "Ready"
	I1018 12:02:36.840443 2077724 node_ready.go:38] duration metric: took 40.006143267s for node "addons-897172" to be "Ready" ...
	I1018 12:02:36.840458 2077724 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:02:36.840524 2077724 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:02:36.878264 2077724 api_server.go:72] duration metric: took 41.736800584s to wait for apiserver process to appear ...
	I1018 12:02:36.878291 2077724 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:02:36.878314 2077724 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:02:36.894278 2077724 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
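
The healthz probe at api_server.go:253 is a plain HTTPS GET against the apiserver that succeeds once the endpoint answers 200 with "ok", as it does above. A simplified sketch follows; certificate verification is skipped only because this example carries no CA bundle (the real check would trust the cluster CA instead).

// Sketch of an apiserver healthz probe. InsecureSkipVerify is an
// assumption for brevity; production code should verify the cluster CA.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}
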
	I1018 12:02:36.896195 2077724 api_server.go:141] control plane version: v1.34.1
	I1018 12:02:36.896224 2077724 api_server.go:131] duration metric: took 17.924905ms to wait for apiserver health ...
	I1018 12:02:36.896234 2077724 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:02:36.906671 2077724 system_pods.go:59] 19 kube-system pods found
	I1018 12:02:36.906711 2077724 system_pods.go:61] "coredns-66bc5c9577-72vfc" [27c050a1-1b42-4f69-a37c-3864d231020f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:02:36.906718 2077724 system_pods.go:61] "csi-hostpath-attacher-0" [a34aa7bb-8dee-4ec2-943a-41a6009b98f3] Pending
	I1018 12:02:36.906725 2077724 system_pods.go:61] "csi-hostpath-resizer-0" [4182e5e9-dc78-47da-981e-07c5080e1bf3] Pending
	I1018 12:02:36.906731 2077724 system_pods.go:61] "csi-hostpathplugin-kkctl" [b9d6daf8-3cf1-4c40-81a0-90f3c347f16b] Pending
	I1018 12:02:36.906735 2077724 system_pods.go:61] "etcd-addons-897172" [1e9ee371-bcf9-477e-b0d7-647b692c0bdf] Running
	I1018 12:02:36.906740 2077724 system_pods.go:61] "kindnet-zx4jd" [f6966169-6361-46fa-b60d-7cb67c785953] Running
	I1018 12:02:36.906745 2077724 system_pods.go:61] "kube-apiserver-addons-897172" [573e8cff-85cc-4011-9bae-ffb4383801a9] Running
	I1018 12:02:36.906756 2077724 system_pods.go:61] "kube-controller-manager-addons-897172" [f5e44727-b557-4306-823a-90703f85917a] Running
	I1018 12:02:36.906761 2077724 system_pods.go:61] "kube-ingress-dns-minikube" [ea2ea820-0b46-46a5-8a36-d2e04ef74663] Pending
	I1018 12:02:36.906768 2077724 system_pods.go:61] "kube-proxy-5wvw6" [9337e807-7e99-4b42-b629-1c3b5cb70d8a] Running
	I1018 12:02:36.906772 2077724 system_pods.go:61] "kube-scheduler-addons-897172" [53d504e2-c117-4c17-834b-e396bec7496c] Running
	I1018 12:02:36.906781 2077724 system_pods.go:61] "metrics-server-85b7d694d7-l9lnl" [6f318350-01b1-4662-8b11-9623be573bf2] Pending
	I1018 12:02:36.906785 2077724 system_pods.go:61] "nvidia-device-plugin-daemonset-9gjsj" [1564a3ac-4f7b-4ea4-be9f-210c9a9f2c5f] Pending
	I1018 12:02:36.906798 2077724 system_pods.go:61] "registry-6b586f9694-m2bbx" [1ff58f99-278b-411c-a5e0-83d367ff2f01] Pending
	I1018 12:02:36.906805 2077724 system_pods.go:61] "registry-creds-764b6fb674-b6zx6" [c32793b2-06b3-4b42-9b78-938c01bcfd38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:02:36.906811 2077724 system_pods.go:61] "registry-proxy-mhx2x" [5d015e78-b666-4fb1-b7f4-92ef0c57241e] Pending
	I1018 12:02:36.906823 2077724 system_pods.go:61] "snapshot-controller-7d9fbc56b8-2np2z" [f53631ad-f24e-4ae7-84a9-2894cd32d094] Pending
	I1018 12:02:36.906830 2077724 system_pods.go:61] "snapshot-controller-7d9fbc56b8-kx2sh" [83ac2397-32ad-4982-bec3-81ea5d47c49a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:36.906836 2077724 system_pods.go:61] "storage-provisioner" [93c0a2f7-1aab-485f-9f75-20eb6c39f998] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:02:36.906846 2077724 system_pods.go:74] duration metric: took 10.605992ms to wait for pod list to return data ...
	I1018 12:02:36.906858 2077724 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:02:36.945271 2077724 default_sa.go:45] found service account: "default"
	I1018 12:02:36.945299 2077724 default_sa.go:55] duration metric: took 38.433528ms for default service account to be created ...
	I1018 12:02:36.945309 2077724 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:02:37.008400 2077724 system_pods.go:86] 19 kube-system pods found
	I1018 12:02:37.008441 2077724 system_pods.go:89] "coredns-66bc5c9577-72vfc" [27c050a1-1b42-4f69-a37c-3864d231020f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:02:37.008448 2077724 system_pods.go:89] "csi-hostpath-attacher-0" [a34aa7bb-8dee-4ec2-943a-41a6009b98f3] Pending
	I1018 12:02:37.008455 2077724 system_pods.go:89] "csi-hostpath-resizer-0" [4182e5e9-dc78-47da-981e-07c5080e1bf3] Pending
	I1018 12:02:37.008460 2077724 system_pods.go:89] "csi-hostpathplugin-kkctl" [b9d6daf8-3cf1-4c40-81a0-90f3c347f16b] Pending
	I1018 12:02:37.008464 2077724 system_pods.go:89] "etcd-addons-897172" [1e9ee371-bcf9-477e-b0d7-647b692c0bdf] Running
	I1018 12:02:37.008470 2077724 system_pods.go:89] "kindnet-zx4jd" [f6966169-6361-46fa-b60d-7cb67c785953] Running
	I1018 12:02:37.008475 2077724 system_pods.go:89] "kube-apiserver-addons-897172" [573e8cff-85cc-4011-9bae-ffb4383801a9] Running
	I1018 12:02:37.008484 2077724 system_pods.go:89] "kube-controller-manager-addons-897172" [f5e44727-b557-4306-823a-90703f85917a] Running
	I1018 12:02:37.008491 2077724 system_pods.go:89] "kube-ingress-dns-minikube" [ea2ea820-0b46-46a5-8a36-d2e04ef74663] Pending
	I1018 12:02:37.008494 2077724 system_pods.go:89] "kube-proxy-5wvw6" [9337e807-7e99-4b42-b629-1c3b5cb70d8a] Running
	I1018 12:02:37.008501 2077724 system_pods.go:89] "kube-scheduler-addons-897172" [53d504e2-c117-4c17-834b-e396bec7496c] Running
	I1018 12:02:37.008509 2077724 system_pods.go:89] "metrics-server-85b7d694d7-l9lnl" [6f318350-01b1-4662-8b11-9623be573bf2] Pending
	I1018 12:02:37.008514 2077724 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjsj" [1564a3ac-4f7b-4ea4-be9f-210c9a9f2c5f] Pending
	I1018 12:02:37.008518 2077724 system_pods.go:89] "registry-6b586f9694-m2bbx" [1ff58f99-278b-411c-a5e0-83d367ff2f01] Pending
	I1018 12:02:37.008537 2077724 system_pods.go:89] "registry-creds-764b6fb674-b6zx6" [c32793b2-06b3-4b42-9b78-938c01bcfd38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:02:37.008542 2077724 system_pods.go:89] "registry-proxy-mhx2x" [5d015e78-b666-4fb1-b7f4-92ef0c57241e] Pending
	I1018 12:02:37.008550 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2np2z" [f53631ad-f24e-4ae7-84a9-2894cd32d094] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:37.008565 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kx2sh" [83ac2397-32ad-4982-bec3-81ea5d47c49a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:37.008571 2077724 system_pods.go:89] "storage-provisioner" [93c0a2f7-1aab-485f-9f75-20eb6c39f998] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:02:37.008589 2077724 retry.go:31] will retry after 303.021535ms: missing components: kube-dns
	I1018 12:02:37.009096 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:37.009194 2077724 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 12:02:37.009208 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:37.010349 2077724 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 12:02:37.010374 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:37.229536 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:37.319989 2077724 system_pods.go:86] 19 kube-system pods found
	I1018 12:02:37.320029 2077724 system_pods.go:89] "coredns-66bc5c9577-72vfc" [27c050a1-1b42-4f69-a37c-3864d231020f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:02:37.320036 2077724 system_pods.go:89] "csi-hostpath-attacher-0" [a34aa7bb-8dee-4ec2-943a-41a6009b98f3] Pending
	I1018 12:02:37.320043 2077724 system_pods.go:89] "csi-hostpath-resizer-0" [4182e5e9-dc78-47da-981e-07c5080e1bf3] Pending
	I1018 12:02:37.320047 2077724 system_pods.go:89] "csi-hostpathplugin-kkctl" [b9d6daf8-3cf1-4c40-81a0-90f3c347f16b] Pending
	I1018 12:02:37.320051 2077724 system_pods.go:89] "etcd-addons-897172" [1e9ee371-bcf9-477e-b0d7-647b692c0bdf] Running
	I1018 12:02:37.320056 2077724 system_pods.go:89] "kindnet-zx4jd" [f6966169-6361-46fa-b60d-7cb67c785953] Running
	I1018 12:02:37.320066 2077724 system_pods.go:89] "kube-apiserver-addons-897172" [573e8cff-85cc-4011-9bae-ffb4383801a9] Running
	I1018 12:02:37.320071 2077724 system_pods.go:89] "kube-controller-manager-addons-897172" [f5e44727-b557-4306-823a-90703f85917a] Running
	I1018 12:02:37.320075 2077724 system_pods.go:89] "kube-ingress-dns-minikube" [ea2ea820-0b46-46a5-8a36-d2e04ef74663] Pending
	I1018 12:02:37.320084 2077724 system_pods.go:89] "kube-proxy-5wvw6" [9337e807-7e99-4b42-b629-1c3b5cb70d8a] Running
	I1018 12:02:37.320089 2077724 system_pods.go:89] "kube-scheduler-addons-897172" [53d504e2-c117-4c17-834b-e396bec7496c] Running
	I1018 12:02:37.320099 2077724 system_pods.go:89] "metrics-server-85b7d694d7-l9lnl" [6f318350-01b1-4662-8b11-9623be573bf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:02:37.320104 2077724 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjsj" [1564a3ac-4f7b-4ea4-be9f-210c9a9f2c5f] Pending
	I1018 12:02:37.320118 2077724 system_pods.go:89] "registry-6b586f9694-m2bbx" [1ff58f99-278b-411c-a5e0-83d367ff2f01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:02:37.320124 2077724 system_pods.go:89] "registry-creds-764b6fb674-b6zx6" [c32793b2-06b3-4b42-9b78-938c01bcfd38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:02:37.320137 2077724 system_pods.go:89] "registry-proxy-mhx2x" [5d015e78-b666-4fb1-b7f4-92ef0c57241e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:02:37.320144 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2np2z" [f53631ad-f24e-4ae7-84a9-2894cd32d094] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:37.320151 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kx2sh" [83ac2397-32ad-4982-bec3-81ea5d47c49a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:37.320162 2077724 system_pods.go:89] "storage-provisioner" [93c0a2f7-1aab-485f-9f75-20eb6c39f998] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:02:37.320178 2077724 retry.go:31] will retry after 288.724433ms: missing components: kube-dns
	I1018 12:02:37.501350 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:37.501704 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:37.502108 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:37.631984 2077724 system_pods.go:86] 19 kube-system pods found
	I1018 12:02:37.632024 2077724 system_pods.go:89] "coredns-66bc5c9577-72vfc" [27c050a1-1b42-4f69-a37c-3864d231020f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:02:37.632034 2077724 system_pods.go:89] "csi-hostpath-attacher-0" [a34aa7bb-8dee-4ec2-943a-41a6009b98f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:02:37.632043 2077724 system_pods.go:89] "csi-hostpath-resizer-0" [4182e5e9-dc78-47da-981e-07c5080e1bf3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:02:37.632050 2077724 system_pods.go:89] "csi-hostpathplugin-kkctl" [b9d6daf8-3cf1-4c40-81a0-90f3c347f16b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:02:37.632055 2077724 system_pods.go:89] "etcd-addons-897172" [1e9ee371-bcf9-477e-b0d7-647b692c0bdf] Running
	I1018 12:02:37.632060 2077724 system_pods.go:89] "kindnet-zx4jd" [f6966169-6361-46fa-b60d-7cb67c785953] Running
	I1018 12:02:37.632069 2077724 system_pods.go:89] "kube-apiserver-addons-897172" [573e8cff-85cc-4011-9bae-ffb4383801a9] Running
	I1018 12:02:37.632074 2077724 system_pods.go:89] "kube-controller-manager-addons-897172" [f5e44727-b557-4306-823a-90703f85917a] Running
	I1018 12:02:37.632086 2077724 system_pods.go:89] "kube-ingress-dns-minikube" [ea2ea820-0b46-46a5-8a36-d2e04ef74663] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:02:37.632091 2077724 system_pods.go:89] "kube-proxy-5wvw6" [9337e807-7e99-4b42-b629-1c3b5cb70d8a] Running
	I1018 12:02:37.632096 2077724 system_pods.go:89] "kube-scheduler-addons-897172" [53d504e2-c117-4c17-834b-e396bec7496c] Running
	I1018 12:02:37.632109 2077724 system_pods.go:89] "metrics-server-85b7d694d7-l9lnl" [6f318350-01b1-4662-8b11-9623be573bf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:02:37.632116 2077724 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjsj" [1564a3ac-4f7b-4ea4-be9f-210c9a9f2c5f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:02:37.632127 2077724 system_pods.go:89] "registry-6b586f9694-m2bbx" [1ff58f99-278b-411c-a5e0-83d367ff2f01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:02:37.632133 2077724 system_pods.go:89] "registry-creds-764b6fb674-b6zx6" [c32793b2-06b3-4b42-9b78-938c01bcfd38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:02:37.632140 2077724 system_pods.go:89] "registry-proxy-mhx2x" [5d015e78-b666-4fb1-b7f4-92ef0c57241e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:02:37.632146 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2np2z" [f53631ad-f24e-4ae7-84a9-2894cd32d094] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:37.632155 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kx2sh" [83ac2397-32ad-4982-bec3-81ea5d47c49a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:37.632165 2077724 system_pods.go:89] "storage-provisioner" [93c0a2f7-1aab-485f-9f75-20eb6c39f998] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:02:37.632183 2077724 retry.go:31] will retry after 378.474191ms: missing components: kube-dns
	I1018 12:02:37.713450 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:37.727658 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:38.003258 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:38.003440 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:38.003601 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:38.015012 2077724 system_pods.go:86] 19 kube-system pods found
	I1018 12:02:38.015053 2077724 system_pods.go:89] "coredns-66bc5c9577-72vfc" [27c050a1-1b42-4f69-a37c-3864d231020f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:02:38.015063 2077724 system_pods.go:89] "csi-hostpath-attacher-0" [a34aa7bb-8dee-4ec2-943a-41a6009b98f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:02:38.015072 2077724 system_pods.go:89] "csi-hostpath-resizer-0" [4182e5e9-dc78-47da-981e-07c5080e1bf3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:02:38.015082 2077724 system_pods.go:89] "csi-hostpathplugin-kkctl" [b9d6daf8-3cf1-4c40-81a0-90f3c347f16b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:02:38.015090 2077724 system_pods.go:89] "etcd-addons-897172" [1e9ee371-bcf9-477e-b0d7-647b692c0bdf] Running
	I1018 12:02:38.015096 2077724 system_pods.go:89] "kindnet-zx4jd" [f6966169-6361-46fa-b60d-7cb67c785953] Running
	I1018 12:02:38.015112 2077724 system_pods.go:89] "kube-apiserver-addons-897172" [573e8cff-85cc-4011-9bae-ffb4383801a9] Running
	I1018 12:02:38.015118 2077724 system_pods.go:89] "kube-controller-manager-addons-897172" [f5e44727-b557-4306-823a-90703f85917a] Running
	I1018 12:02:38.015126 2077724 system_pods.go:89] "kube-ingress-dns-minikube" [ea2ea820-0b46-46a5-8a36-d2e04ef74663] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:02:38.015135 2077724 system_pods.go:89] "kube-proxy-5wvw6" [9337e807-7e99-4b42-b629-1c3b5cb70d8a] Running
	I1018 12:02:38.015140 2077724 system_pods.go:89] "kube-scheduler-addons-897172" [53d504e2-c117-4c17-834b-e396bec7496c] Running
	I1018 12:02:38.015145 2077724 system_pods.go:89] "metrics-server-85b7d694d7-l9lnl" [6f318350-01b1-4662-8b11-9623be573bf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:02:38.015153 2077724 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjsj" [1564a3ac-4f7b-4ea4-be9f-210c9a9f2c5f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:02:38.015181 2077724 system_pods.go:89] "registry-6b586f9694-m2bbx" [1ff58f99-278b-411c-a5e0-83d367ff2f01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:02:38.015190 2077724 system_pods.go:89] "registry-creds-764b6fb674-b6zx6" [c32793b2-06b3-4b42-9b78-938c01bcfd38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:02:38.015201 2077724 system_pods.go:89] "registry-proxy-mhx2x" [5d015e78-b666-4fb1-b7f4-92ef0c57241e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:02:38.015208 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2np2z" [f53631ad-f24e-4ae7-84a9-2894cd32d094] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:38.015219 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kx2sh" [83ac2397-32ad-4982-bec3-81ea5d47c49a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:38.015226 2077724 system_pods.go:89] "storage-provisioner" [93c0a2f7-1aab-485f-9f75-20eb6c39f998] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:02:38.015246 2077724 retry.go:31] will retry after 499.684215ms: missing components: kube-dns
	I1018 12:02:38.227339 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:38.501450 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:38.501592 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:38.502776 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:38.519711 2077724 system_pods.go:86] 19 kube-system pods found
	I1018 12:02:38.519752 2077724 system_pods.go:89] "coredns-66bc5c9577-72vfc" [27c050a1-1b42-4f69-a37c-3864d231020f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 12:02:38.519762 2077724 system_pods.go:89] "csi-hostpath-attacher-0" [a34aa7bb-8dee-4ec2-943a-41a6009b98f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:02:38.519770 2077724 system_pods.go:89] "csi-hostpath-resizer-0" [4182e5e9-dc78-47da-981e-07c5080e1bf3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:02:38.519777 2077724 system_pods.go:89] "csi-hostpathplugin-kkctl" [b9d6daf8-3cf1-4c40-81a0-90f3c347f16b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:02:38.519785 2077724 system_pods.go:89] "etcd-addons-897172" [1e9ee371-bcf9-477e-b0d7-647b692c0bdf] Running
	I1018 12:02:38.519796 2077724 system_pods.go:89] "kindnet-zx4jd" [f6966169-6361-46fa-b60d-7cb67c785953] Running
	I1018 12:02:38.519801 2077724 system_pods.go:89] "kube-apiserver-addons-897172" [573e8cff-85cc-4011-9bae-ffb4383801a9] Running
	I1018 12:02:38.519812 2077724 system_pods.go:89] "kube-controller-manager-addons-897172" [f5e44727-b557-4306-823a-90703f85917a] Running
	I1018 12:02:38.519818 2077724 system_pods.go:89] "kube-ingress-dns-minikube" [ea2ea820-0b46-46a5-8a36-d2e04ef74663] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:02:38.519830 2077724 system_pods.go:89] "kube-proxy-5wvw6" [9337e807-7e99-4b42-b629-1c3b5cb70d8a] Running
	I1018 12:02:38.519853 2077724 system_pods.go:89] "kube-scheduler-addons-897172" [53d504e2-c117-4c17-834b-e396bec7496c] Running
	I1018 12:02:38.519862 2077724 system_pods.go:89] "metrics-server-85b7d694d7-l9lnl" [6f318350-01b1-4662-8b11-9623be573bf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:02:38.519868 2077724 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjsj" [1564a3ac-4f7b-4ea4-be9f-210c9a9f2c5f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:02:38.519878 2077724 system_pods.go:89] "registry-6b586f9694-m2bbx" [1ff58f99-278b-411c-a5e0-83d367ff2f01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:02:38.519885 2077724 system_pods.go:89] "registry-creds-764b6fb674-b6zx6" [c32793b2-06b3-4b42-9b78-938c01bcfd38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:02:38.519895 2077724 system_pods.go:89] "registry-proxy-mhx2x" [5d015e78-b666-4fb1-b7f4-92ef0c57241e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:02:38.519903 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2np2z" [f53631ad-f24e-4ae7-84a9-2894cd32d094] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:38.519916 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kx2sh" [83ac2397-32ad-4982-bec3-81ea5d47c49a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:38.519920 2077724 system_pods.go:89] "storage-provisioner" [93c0a2f7-1aab-485f-9f75-20eb6c39f998] Running
	I1018 12:02:38.519935 2077724 retry.go:31] will retry after 619.284345ms: missing components: kube-dns
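
Each system_pods retry above re-lists kube-system and reports "missing components: kube-dns" until CoreDNS reaches Running (which happens two retries later, after about 2.2s total). A rough client-go equivalent of that poll follows; it is a sketch, not minikube's system_pods.go, and the retry interval is taken loosely from the log.

// Sketch: poll kube-system until the kube-dns (CoreDNS) pod is Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 &&
			pods.Items[0].Status.Phase == corev1.PodRunning {
			fmt.Println("kube-dns is Running")
			return
		}
		fmt.Println("missing components: kube-dns; retrying")
		time.Sleep(300 * time.Millisecond) // log shows ~300-620ms retry intervals
	}
}
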
	I1018 12:02:38.727083 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:39.003613 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:39.003781 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:39.004504 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:39.146577 2077724 system_pods.go:86] 19 kube-system pods found
	I1018 12:02:39.146613 2077724 system_pods.go:89] "coredns-66bc5c9577-72vfc" [27c050a1-1b42-4f69-a37c-3864d231020f] Running
	I1018 12:02:39.146635 2077724 system_pods.go:89] "csi-hostpath-attacher-0" [a34aa7bb-8dee-4ec2-943a-41a6009b98f3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 12:02:39.146643 2077724 system_pods.go:89] "csi-hostpath-resizer-0" [4182e5e9-dc78-47da-981e-07c5080e1bf3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 12:02:39.146656 2077724 system_pods.go:89] "csi-hostpathplugin-kkctl" [b9d6daf8-3cf1-4c40-81a0-90f3c347f16b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 12:02:39.146671 2077724 system_pods.go:89] "etcd-addons-897172" [1e9ee371-bcf9-477e-b0d7-647b692c0bdf] Running
	I1018 12:02:39.146676 2077724 system_pods.go:89] "kindnet-zx4jd" [f6966169-6361-46fa-b60d-7cb67c785953] Running
	I1018 12:02:39.146681 2077724 system_pods.go:89] "kube-apiserver-addons-897172" [573e8cff-85cc-4011-9bae-ffb4383801a9] Running
	I1018 12:02:39.146690 2077724 system_pods.go:89] "kube-controller-manager-addons-897172" [f5e44727-b557-4306-823a-90703f85917a] Running
	I1018 12:02:39.146697 2077724 system_pods.go:89] "kube-ingress-dns-minikube" [ea2ea820-0b46-46a5-8a36-d2e04ef74663] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 12:02:39.146707 2077724 system_pods.go:89] "kube-proxy-5wvw6" [9337e807-7e99-4b42-b629-1c3b5cb70d8a] Running
	I1018 12:02:39.146712 2077724 system_pods.go:89] "kube-scheduler-addons-897172" [53d504e2-c117-4c17-834b-e396bec7496c] Running
	I1018 12:02:39.146717 2077724 system_pods.go:89] "metrics-server-85b7d694d7-l9lnl" [6f318350-01b1-4662-8b11-9623be573bf2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 12:02:39.146734 2077724 system_pods.go:89] "nvidia-device-plugin-daemonset-9gjsj" [1564a3ac-4f7b-4ea4-be9f-210c9a9f2c5f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 12:02:39.146740 2077724 system_pods.go:89] "registry-6b586f9694-m2bbx" [1ff58f99-278b-411c-a5e0-83d367ff2f01] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 12:02:39.146748 2077724 system_pods.go:89] "registry-creds-764b6fb674-b6zx6" [c32793b2-06b3-4b42-9b78-938c01bcfd38] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 12:02:39.146755 2077724 system_pods.go:89] "registry-proxy-mhx2x" [5d015e78-b666-4fb1-b7f4-92ef0c57241e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 12:02:39.146764 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-2np2z" [f53631ad-f24e-4ae7-84a9-2894cd32d094] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:39.146786 2077724 system_pods.go:89] "snapshot-controller-7d9fbc56b8-kx2sh" [83ac2397-32ad-4982-bec3-81ea5d47c49a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 12:02:39.146799 2077724 system_pods.go:89] "storage-provisioner" [93c0a2f7-1aab-485f-9f75-20eb6c39f998] Running
	I1018 12:02:39.146809 2077724 system_pods.go:126] duration metric: took 2.201493858s to wait for k8s-apps to be running ...
	I1018 12:02:39.146821 2077724 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:02:39.146879 2077724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:02:39.227308 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:39.504835 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:39.504929 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:39.506327 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:39.511518 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.798025909s)
	W1018 12:02:39.511558 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:39.511575 2077724 retry.go:31] will retry after 18.950534275s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
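
The validation error is specific: at least one YAML document inside ig-crd.yaml is reaching kubectl without apiVersion and kind set, which kubectl rejects unless --validate=false is passed. A small Go check that reproduces the complaint by decoding each document in the file and testing for those two keys follows; gopkg.in/yaml.v3 is an assumed dependency, and kubectl's own validator is considerably more involved.

// Sketch: flag YAML documents that lack apiVersion or kind, the two
// fields the kubectl error above complains about.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml") // path from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
			break
		} else if err != nil {
			panic(err)
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			fmt.Printf("document %d: apiVersion or kind not set\n", i)
		}
	}
}
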
	I1018 12:02:39.511614 2077724 system_svc.go:56] duration metric: took 364.788564ms WaitForService to wait for kubelet
	I1018 12:02:39.511630 2077724 kubeadm.go:586] duration metric: took 44.370171471s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:02:39.511647 2077724 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:02:39.514514 2077724 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:02:39.514548 2077724 node_conditions.go:123] node cpu capacity is 2
	I1018 12:02:39.514560 2077724 node_conditions.go:105] duration metric: took 2.902686ms to run NodePressure ...
	I1018 12:02:39.514571 2077724 start.go:241] waiting for startup goroutines ...
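
The node_conditions step reads the node's reported capacity (203034800Ki ephemeral storage and 2 CPUs here) and checks its pressure conditions. A sketch of the same lookup via client-go; simplified relative to node_conditions.go, with the node name taken from the log.

// Sketch: print the node capacity figures and conditions that the
// NodePressure check above inspects.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-897172", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Println("ephemeral-storage:", storage.String()) // 203034800Ki in the log
	fmt.Println("cpu:", cpu.String())                   // 2 in the log
	// Conditions include MemoryPressure, DiskPressure, PIDPressure, Ready.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s\n", c.Type, c.Status)
	}
}
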
	I1018 12:02:39.727772 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:40.004549 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:40.004656 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:40.005575 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:40.229608 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:40.503647 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:40.504154 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:40.504462 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:40.728307 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:41.002301 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:41.002609 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:41.005098 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:41.227219 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:41.502619 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:41.502890 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:41.502993 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:41.727347 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:42.007960 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:42.008259 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:42.008390 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:42.229016 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:42.500691 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:42.502766 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:42.504148 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:42.727862 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:43.001582 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:43.002088 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:43.004575 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:43.227069 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:43.504190 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:43.504389 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:43.504520 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:43.727906 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:44.002425 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:44.002801 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:44.003692 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:44.227724 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:44.502469 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:44.502711 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:44.503042 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:44.727307 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:45.009271 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:45.009546 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:45.009665 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:45.255478 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:45.510914 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:45.511592 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:45.511952 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:45.727763 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:46.019229 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:46.019705 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:46.020121 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:46.227928 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:46.502839 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:46.504039 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:46.505970 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:46.728800 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:47.016437 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:47.016614 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:47.016786 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:47.228633 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:47.501692 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:47.502004 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:47.502929 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:47.726824 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:48.008737 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:48.008830 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:48.011706 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:48.227625 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:48.502341 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:48.502612 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:48.502979 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:48.727406 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:49.005101 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:49.005281 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:49.005948 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:49.227945 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:49.502797 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:49.503179 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:49.503753 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:49.727698 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:50.005959 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:50.008611 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:50.010006 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:50.227406 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:50.503206 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:50.503621 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:50.503741 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:50.727591 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:51.005124 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:51.006185 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:51.006980 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:51.227564 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:51.502857 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:51.503347 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:51.503712 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:51.728292 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:52.002458 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:52.002741 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:52.005117 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:52.227183 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:52.502138 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:52.502288 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:52.504034 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:52.727424 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:53.002195 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:53.002469 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:53.002799 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:53.227770 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:53.502607 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:53.502779 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:53.503126 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:53.727298 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:54.001504 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:54.001736 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:54.003819 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:54.227097 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:54.500992 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:54.501240 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:54.502879 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:54.727994 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:54.999639 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:55.004992 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:55.005253 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:55.235217 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:55.502543 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:55.502999 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:55.503315 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:55.727523 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:56.001762 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:56.002161 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:56.005447 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:56.227997 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:56.500005 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:56.501642 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:56.501823 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:56.727731 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:57.004094 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:57.004311 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:57.004386 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:57.227948 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:57.499162 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:57.501230 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:57.501373 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:57.728426 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:58.005796 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:58.006755 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:58.008079 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:58.228205 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:58.462310 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:02:58.503713 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:58.503800 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:58.504505 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:58.727458 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:59.005887 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:59.006304 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:59.006726 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:59.228181 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:02:59.503383 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:02:59.503939 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:02:59.505355 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:02:59.585804 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.123460058s)
	W1018 12:02:59.585900 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:59.585963 2077724 retry.go:31] will retry after 26.550505718s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 12:02:59.728227 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:00.002395 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:00.002957 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:00.030451 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:00.266341 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:00.503206 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:00.504132 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:00.519382 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:00.727824 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:01.001718 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:01.002197 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:01.006422 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:01.228016 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:01.503357 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:01.505002 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:01.506196 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:01.727063 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:02.003772 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:02.004370 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:02.007484 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:02.228031 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:02.503799 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:02.504162 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:02.504231 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:02.727150 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:03.000278 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:03.003609 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:03.003623 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:03.227542 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:03.501852 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:03.502056 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:03.503375 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:03.728495 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:04.004220 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:04.005349 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:04.009220 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:04.229420 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:04.515378 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:04.516285 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:04.516414 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:04.727718 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:05.007245 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:05.007298 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:05.008724 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:05.230549 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:05.512261 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:05.512516 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:05.512963 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:05.727567 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:06.000401 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:06.001885 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:06.004015 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:06.226775 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:06.502287 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:06.503039 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:06.503971 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:06.727826 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:07.001841 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:07.002743 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:07.004936 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:07.227901 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:07.500888 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:07.501008 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:07.503391 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:07.727370 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:08.000708 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:08.001282 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:08.005672 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:08.228118 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:08.503072 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:08.503349 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:08.503515 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:08.727984 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:09.004766 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:09.004868 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:09.005664 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:09.227711 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:09.499542 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:09.503403 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:09.503728 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:09.728160 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:10.002553 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:10.005485 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:10.008937 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:10.227774 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:10.504831 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:10.505425 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:10.505585 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:10.728164 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:11.003271 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:11.004554 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:11.005767 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:11.228135 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:11.505201 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:11.505406 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:11.505918 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:11.727447 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:12.007360 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:12.007931 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:12.008431 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:12.228144 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:12.500781 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:12.500916 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:12.501542 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:12.727774 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:12.999786 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:13.001442 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:13.005162 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:13.227730 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:13.500170 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:13.500421 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:13.502221 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:13.727605 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:14.005252 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:14.005848 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:14.006956 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:14.226728 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:14.501064 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:14.502593 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:14.502788 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:14.728228 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:15.010416 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:15.011700 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:15.011735 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:15.230765 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:15.501403 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:15.502306 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:15.503251 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:15.727033 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:16.001677 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:16.005509 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:16.006397 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:16.228449 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:16.503435 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:16.503587 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:16.504115 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:16.727214 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:17.003277 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:17.003776 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:17.006771 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:17.229143 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:17.501546 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:17.501737 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:17.503759 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:17.728852 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:18.003620 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:18.004267 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:18.006697 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:18.227637 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:18.500739 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:18.502926 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:18.503265 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:18.727562 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:19.008585 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:19.008804 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:19.008872 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:19.228546 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:19.499220 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:19.500568 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:19.501361 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:19.727169 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:20.079743 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:20.080280 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:20.080797 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:20.228036 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:20.500503 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:20.501312 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:20.504463 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:20.727707 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:21.037881 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:21.038019 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:21.038278 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:21.226976 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:21.499147 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:21.500613 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:21.502037 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:21.727211 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:22.006446 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:22.007168 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:22.007932 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:22.227932 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:22.499356 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:22.501516 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:22.501697 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:22.728083 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:23.004233 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:23.006471 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:23.006669 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:23.228237 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:23.502927 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:23.503281 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:23.503358 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:23.727592 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:23.999898 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 12:03:24.000687 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:24.005469 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:24.232525 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:24.501488 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:24.501852 2077724 kapi.go:107] duration metric: took 1m20.50511517s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 12:03:24.504922 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:24.727573 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:24.999720 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:25.003128 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:25.227715 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:25.499983 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:25.504283 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:25.726627 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:25.999879 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:26.003088 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:26.137354 2077724 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 12:03:26.227338 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:26.501375 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:26.512655 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:26.729027 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:27.000474 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:27.004984 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:27.227487 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:27.301074 2077724 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.163678197s)
	W1018 12:03:27.301125 2077724 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 12:03:27.301235 2077724 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
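	The inspektor-gadget failure above is deterministic: /etc/kubernetes/addons/ig-crd.yaml fails kubectl's client-side validation because a document in it lacks the mandatory apiVersion and kind fields, so both the initial apply and the retry exit with status 1 and minikube gives up on the addon. A minimal sketch of the bypass that the stderr itself suggests, using the same paths as the log; note that --validate=false only skips the check and does not repair the malformed manifest:

	  # Sketch only: re-run the failing apply with client-side validation
	  # disabled, assuming a shell inside the minikube node as in the log.
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	    -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml

	  # The real fix is in the manifest: every YAML document must begin with
	  # both fields, for example (illustrative values):
	  #   apiVersion: apiextensions.k8s.io/v1
	  #   kind: CustomResourceDefinition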
	I1018 12:03:27.499118 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:27.501287 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:27.727788 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:28.004575 2077724 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 12:03:28.008417 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:28.229655 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:28.501242 2077724 kapi.go:107] duration metric: took 1m24.505309751s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 12:03:28.509668 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:28.730516 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:29.003211 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:29.227992 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:29.502467 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:29.727668 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:30.004491 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:30.227463 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:30.504076 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:30.728572 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:31.009050 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:31.227148 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:31.503746 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:31.730453 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:32.005805 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:32.231507 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:32.501665 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:32.727634 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:33.004213 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:33.227250 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:33.504887 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:33.738406 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:34.014732 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:34.233936 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:34.502538 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:34.727908 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:35.009631 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:35.228098 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:35.502367 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:35.727581 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:36.004156 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:36.236864 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:36.502160 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:36.727367 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:37.005023 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 12:03:37.227661 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:37.502624 2077724 kapi.go:107] duration metric: took 1m33.504001092s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 12:03:37.727580 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:38.229048 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:38.728075 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:39.227422 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:39.727896 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:40.227615 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:40.728417 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:41.229219 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:41.728138 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:42.248176 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:42.728872 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:43.227576 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:43.727480 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:44.227826 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:44.727314 2077724 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 12:03:45.239629 2077724 kapi.go:107] duration metric: took 1m40.01557586s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 12:03:45.242699 2077724 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-897172 cluster.
	I1018 12:03:45.245514 2077724 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 12:03:45.248471 2077724 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 12:03:45.253114 2077724 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner-rancher, ingress-dns, registry-creds, cloud-spanner, volcano, nvidia-device-plugin, metrics-server, storage-provisioner, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1018 12:03:45.259554 2077724 addons.go:514] duration metric: took 1m50.117016996s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner-rancher ingress-dns registry-creds cloud-spanner volcano nvidia-device-plugin metrics-server storage-provisioner yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1018 12:03:45.259652 2077724 start.go:246] waiting for cluster config update ...
	I1018 12:03:45.259698 2077724 start.go:255] writing updated cluster config ...
	I1018 12:03:45.260141 2077724 ssh_runner.go:195] Run: rm -f paused
	I1018 12:03:45.265506 2077724 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:03:45.338220 2077724 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-72vfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.351269 2077724 pod_ready.go:94] pod "coredns-66bc5c9577-72vfc" is "Ready"
	I1018 12:03:45.351306 2077724 pod_ready.go:86] duration metric: took 13.049932ms for pod "coredns-66bc5c9577-72vfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.358453 2077724 pod_ready.go:83] waiting for pod "etcd-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.371893 2077724 pod_ready.go:94] pod "etcd-addons-897172" is "Ready"
	I1018 12:03:45.371920 2077724 pod_ready.go:86] duration metric: took 13.427326ms for pod "etcd-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.382605 2077724 pod_ready.go:83] waiting for pod "kube-apiserver-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.393669 2077724 pod_ready.go:94] pod "kube-apiserver-addons-897172" is "Ready"
	I1018 12:03:45.393729 2077724 pod_ready.go:86] duration metric: took 11.092744ms for pod "kube-apiserver-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.398420 2077724 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.669316 2077724 pod_ready.go:94] pod "kube-controller-manager-addons-897172" is "Ready"
	I1018 12:03:45.669347 2077724 pod_ready.go:86] duration metric: took 270.88071ms for pod "kube-controller-manager-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:45.869697 2077724 pod_ready.go:83] waiting for pod "kube-proxy-5wvw6" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:46.269620 2077724 pod_ready.go:94] pod "kube-proxy-5wvw6" is "Ready"
	I1018 12:03:46.269702 2077724 pod_ready.go:86] duration metric: took 399.975412ms for pod "kube-proxy-5wvw6" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:46.470278 2077724 pod_ready.go:83] waiting for pod "kube-scheduler-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:46.869534 2077724 pod_ready.go:94] pod "kube-scheduler-addons-897172" is "Ready"
	I1018 12:03:46.869567 2077724 pod_ready.go:86] duration metric: took 399.256552ms for pod "kube-scheduler-addons-897172" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:03:46.869580 2077724 pod_ready.go:40] duration metric: took 1.604038848s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:03:46.942667 2077724 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 12:03:46.945906 2077724 out.go:179] * Done! kubectl is now configured to use "addons-897172" cluster and "default" namespace by default
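A note on the gcp-auth hints above: the addon's webhook mutates pods at creation time, so opting a pod out of credential injection has to happen in the pod manifest itself, not after the fact. A minimal sketch using the label key named in the log (pod name and image are illustrative; only the `gcp-auth-skip-secret` key is guaranteed by the message above, and the "true" value is an assumption):

    kubectl --context addons-897172 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds              # illustrative name
      labels:
        gcp-auth-skip-secret: "true"  # key taken from the log above; value assumed
    spec:
      containers:
      - name: app
        image: nginx:alpine
    EOF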
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5499481ac9b50       1611cd07b61d5       3 minutes ago       Running             busybox                   0                   4a3ddf10376ea       busybox                                     default
	5a018d85eae0c       7b6a2fb1abbc9       4 minutes ago       Running             gadget                    0                   b6af92b83bc4f       gadget-6z8nf                                gadget
	a4c86ede4040f       21bfedf4686d5       4 minutes ago       Running             controller                0                   f9f3903a76c7c       ingress-nginx-controller-675c5ddd98-6zvh8   ingress-nginx
	82ce7453ce9fa       7ce2150c8929b       4 minutes ago       Running             local-path-provisioner    0                   29b9494905e05       local-path-provisioner-648f6765c9-vc9vk     local-path-storage
	efc2d51581a80       9a80c0c8eb61c       5 minutes ago       Exited              patch                     0                   120e97d197871       ingress-nginx-admission-patch-xmghg         ingress-nginx
	4c0137541ea54       34da3fe7b8efb       5 minutes ago       Running             minikube-ingress-dns      0                   8ba068ab69e71       kube-ingress-dns-minikube                   kube-system
	68473577ab424       9a80c0c8eb61c       5 minutes ago       Exited              create                    0                   06cb898a4cb4d       ingress-nginx-admission-create-kx9wc        ingress-nginx
	2d600e8cf22c7       ba04bb24b9575       5 minutes ago       Running             storage-provisioner       0                   27a91c68a6394       storage-provisioner                         kube-system
	f123a37f27029       138784d87c9c5       5 minutes ago       Running             coredns                   0                   1e37461c9699c       coredns-66bc5c9577-72vfc                    kube-system
	0f2a5a2b37744       05baa95f5142d       6 minutes ago       Running             kube-proxy                0                   21f4933f9660e       kube-proxy-5wvw6                            kube-system
	ae55d5b301167       b1a8c6f707935       6 minutes ago       Running             kindnet-cni               0                   d0f4193f79f9d       kindnet-zx4jd                               kube-system
	5f5b20ddb03b7       7eb2c6ff0c5a7       6 minutes ago       Running             kube-controller-manager   0                   d28db8ac80b8b       kube-controller-manager-addons-897172       kube-system
	a4a3de681e4e8       43911e833d64d       6 minutes ago       Running             kube-apiserver            0                   6a479a5201f27       kube-apiserver-addons-897172                kube-system
	7f058fe4c8a27       a1894772a478e       6 minutes ago       Running             etcd                      0                   46436fef2ffc5       etcd-addons-897172                          kube-system
	eed961508f62d       b5f57ec6b9867       6 minutes ago       Running             kube-scheduler            0                   91a66c4579b1f       kube-scheduler-addons-897172                kube-system
	
	
	==> containerd <==
	Oct 18 12:06:51 addons-897172 containerd[754]: time="2025-10-18T12:06:51.344569795Z" level=info msg="TearDown network for sandbox \"000b89aceaf0b3ee02c873133fe31e9a2bde7d6bfe29930fc11e64faf454d1be\" successfully"
	Oct 18 12:06:51 addons-897172 containerd[754]: time="2025-10-18T12:06:51.344608539Z" level=info msg="StopPodSandbox for \"000b89aceaf0b3ee02c873133fe31e9a2bde7d6bfe29930fc11e64faf454d1be\" returns successfully"
	Oct 18 12:06:51 addons-897172 containerd[754]: time="2025-10-18T12:06:51.345149908Z" level=info msg="RemovePodSandbox for \"000b89aceaf0b3ee02c873133fe31e9a2bde7d6bfe29930fc11e64faf454d1be\""
	Oct 18 12:06:51 addons-897172 containerd[754]: time="2025-10-18T12:06:51.345188110Z" level=info msg="Forcibly stopping sandbox \"000b89aceaf0b3ee02c873133fe31e9a2bde7d6bfe29930fc11e64faf454d1be\""
	Oct 18 12:06:51 addons-897172 containerd[754]: time="2025-10-18T12:06:51.353551562Z" level=info msg="TearDown network for sandbox \"000b89aceaf0b3ee02c873133fe31e9a2bde7d6bfe29930fc11e64faf454d1be\" successfully"
	Oct 18 12:06:51 addons-897172 containerd[754]: time="2025-10-18T12:06:51.360031041Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"000b89aceaf0b3ee02c873133fe31e9a2bde7d6bfe29930fc11e64faf454d1be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 18 12:06:51 addons-897172 containerd[754]: time="2025-10-18T12:06:51.360246026Z" level=info msg="RemovePodSandbox \"000b89aceaf0b3ee02c873133fe31e9a2bde7d6bfe29930fc11e64faf454d1be\" returns successfully"
	Oct 18 12:06:51 addons-897172 containerd[754]: time="2025-10-18T12:06:51.360760615Z" level=info msg="StopPodSandbox for \"9b3f0f4274706eb3864f45a409ee9136340683b3c7f6f2042010c6f39db806d8\""
	Oct 18 12:06:51 addons-897172 containerd[754]: time="2025-10-18T12:06:51.368430562Z" level=info msg="TearDown network for sandbox \"9b3f0f4274706eb3864f45a409ee9136340683b3c7f6f2042010c6f39db806d8\" successfully"
	Oct 18 12:06:51 addons-897172 containerd[754]: time="2025-10-18T12:06:51.368472153Z" level=info msg="StopPodSandbox for \"9b3f0f4274706eb3864f45a409ee9136340683b3c7f6f2042010c6f39db806d8\" returns successfully"
	Oct 18 12:06:51 addons-897172 containerd[754]: time="2025-10-18T12:06:51.369042822Z" level=info msg="RemovePodSandbox for \"9b3f0f4274706eb3864f45a409ee9136340683b3c7f6f2042010c6f39db806d8\""
	Oct 18 12:06:51 addons-897172 containerd[754]: time="2025-10-18T12:06:51.369111522Z" level=info msg="Forcibly stopping sandbox \"9b3f0f4274706eb3864f45a409ee9136340683b3c7f6f2042010c6f39db806d8\""
	Oct 18 12:06:51 addons-897172 containerd[754]: time="2025-10-18T12:06:51.377192659Z" level=info msg="TearDown network for sandbox \"9b3f0f4274706eb3864f45a409ee9136340683b3c7f6f2042010c6f39db806d8\" successfully"
	Oct 18 12:06:51 addons-897172 containerd[754]: time="2025-10-18T12:06:51.383449653Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b3f0f4274706eb3864f45a409ee9136340683b3c7f6f2042010c6f39db806d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 18 12:06:51 addons-897172 containerd[754]: time="2025-10-18T12:06:51.383526344Z" level=info msg="RemovePodSandbox \"9b3f0f4274706eb3864f45a409ee9136340683b3c7f6f2042010c6f39db806d8\" returns successfully"
	Oct 18 12:07:30 addons-897172 containerd[754]: time="2025-10-18T12:07:30.916338672Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Oct 18 12:07:30 addons-897172 containerd[754]: time="2025-10-18T12:07:30.918694149Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:07:31 addons-897172 containerd[754]: time="2025-10-18T12:07:31.054775923Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:07:31 addons-897172 containerd[754]: time="2025-10-18T12:07:31.348559411Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:07:31 addons-897172 containerd[754]: time="2025-10-18T12:07:31.348608624Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10967"
	Oct 18 12:08:01 addons-897172 containerd[754]: time="2025-10-18T12:08:01.916649882Z" level=info msg="PullImage \"busybox:stable\""
	Oct 18 12:08:01 addons-897172 containerd[754]: time="2025-10-18T12:08:01.919497294Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:08:02 addons-897172 containerd[754]: time="2025-10-18T12:08:02.063603093Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:08:02 addons-897172 containerd[754]: time="2025-10-18T12:08:02.340558193Z" level=error msg="PullImage \"busybox:stable\" failed" error="failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:08:02 addons-897172 containerd[754]: time="2025-10-18T12:08:02.340670781Z" level=info msg="stop pulling image docker.io/library/busybox:stable: active requests=0, bytes read=10979"
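The 429 responses above are Docker Hub's unauthenticated pull rate limit, and they are the direct cause of the nginx and busybox ImagePullBackOff errors in the kubelet log further down. One hedged workaround sketch, assuming the host's own pull path still has quota (or already caches the images): preload them into the cluster so the node's containerd never has to contact registry-1.docker.io:

    # pull on the host, then side-load into the addons-897172 profile
    docker pull nginx:alpine
    docker pull busybox:stable
    minikube -p addons-897172 image load nginx:alpine
    minikube -p addons-897172 image load busybox:stable

Authenticating the in-cluster pulls instead (for example, via an imagePullSecret for Docker Hub) would also raise the rate limit.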
	
	
	==> coredns [f123a37f27029ed2d0dd392b04368405c969d126e49fca266067c7e310ee6f94] <==
	[INFO] 10.244.0.19:37177 - 22478 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00173961s
	[INFO] 10.244.0.19:37177 - 49682 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000120243s
	[INFO] 10.244.0.19:37177 - 1763 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000150987s
	[INFO] 10.244.0.19:54520 - 609 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000132296s
	[INFO] 10.244.0.19:54520 - 855 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093011s
	[INFO] 10.244.0.19:55596 - 43458 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107452s
	[INFO] 10.244.0.19:55596 - 43623 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00014814s
	[INFO] 10.244.0.19:34831 - 7198 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000115s
	[INFO] 10.244.0.19:34831 - 7618 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000131648s
	[INFO] 10.244.0.19:38423 - 50624 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001058765s
	[INFO] 10.244.0.19:38423 - 50204 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00144571s
	[INFO] 10.244.0.19:42693 - 42496 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000139041s
	[INFO] 10.244.0.19:42693 - 42900 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000225915s
	[INFO] 10.244.0.25:57771 - 30763 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151964s
	[INFO] 10.244.0.25:53955 - 25291 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000153817s
	[INFO] 10.244.0.25:36816 - 301 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000156665s
	[INFO] 10.244.0.25:48027 - 62261 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152423s
	[INFO] 10.244.0.25:50850 - 2023 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000150348s
	[INFO] 10.244.0.25:33129 - 55957 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134036s
	[INFO] 10.244.0.25:35253 - 51648 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001863587s
	[INFO] 10.244.0.25:41054 - 43197 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001946046s
	[INFO] 10.244.0.25:32813 - 30229 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00142852s
	[INFO] 10.244.0.25:35445 - 2635 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001811879s
	[INFO] 10.244.0.29:50482 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000198017s
	[INFO] 10.244.0.29:48743 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000127635s
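The NXDOMAIN bursts above are expected resolver behavior, not failures: with the default pod `ndots:5` option, a name such as registry.kube-system.svc.cluster.local is first expanded through every search domain visible in the log (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the absolute name resolves with NOERROR. A quick way to confirm the search path from a running pod in this cluster (output illustrative):

    kubectl --context addons-897172 exec busybox -- cat /etc/resolv.conf
    # search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    # nameserver 10.96.0.10
    # options ndots:5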
	
	
	==> describe nodes <==
	Name:               addons-897172
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-897172
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=addons-897172
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_01_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-897172
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:01:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-897172
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:08:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:05:25 +0000   Sat, 18 Oct 2025 12:01:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:05:25 +0000   Sat, 18 Oct 2025 12:01:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:05:25 +0000   Sat, 18 Oct 2025 12:01:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:05:25 +0000   Sat, 18 Oct 2025 12:02:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-897172
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                4f3b39b2-3519-409f-9958-4d7fb9c61252
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  gadget                      gadget-6z8nf                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-6zvh8    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m14s
	  kube-system                 coredns-66bc5c9577-72vfc                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m21s
	  kube-system                 etcd-addons-897172                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m26s
	  kube-system                 kindnet-zx4jd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m21s
	  kube-system                 kube-apiserver-addons-897172                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-addons-897172        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-5wvw6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-addons-897172                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 registry-creds-764b6fb674-b6zx6              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  local-path-storage          local-path-provisioner-648f6765c9-vc9vk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 6m20s  kube-proxy       
	  Normal   Starting                 6m27s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m27s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  6m26s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m26s  kubelet          Node addons-897172 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m26s  kubelet          Node addons-897172 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m26s  kubelet          Node addons-897172 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m22s  node-controller  Node addons-897172 event: Registered Node addons-897172 in Controller
	  Normal   NodeReady                5m40s  kubelet          Node addons-897172 status is now: NodeReady
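The CgroupV1 warning in the events above is informational on this kernel. A standard way to confirm which cgroup hierarchy the node is actually running (assuming `stat` is available in the minikube node image, which is Debian-based per the node description):

    minikube -p addons-897172 ssh -- stat -fc %T /sys/fs/cgroup
    # cgroup2fs -> cgroup v2; tmpfs -> the legacy v1 hierarchy flagged above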
	
	
	==> dmesg <==
	[Oct18 11:37] overlayfs: idmapped layers are currently not supported
	[Oct18 11:38] overlayfs: idmapped layers are currently not supported
	[Oct18 11:40] overlayfs: idmapped layers are currently not supported
	[Oct18 11:42] overlayfs: idmapped layers are currently not supported
	[Oct18 11:43] overlayfs: idmapped layers are currently not supported
	[ +44.292171] overlayfs: idmapped layers are currently not supported
	[  +9.552091] overlayfs: idmapped layers are currently not supported
	[Oct18 11:44] overlayfs: idmapped layers are currently not supported
	[Oct18 11:45] overlayfs: idmapped layers are currently not supported
	[Oct18 11:47] overlayfs: idmapped layers are currently not supported
	[ +55.826989] overlayfs: idmapped layers are currently not supported
	[Oct18 11:48] overlayfs: idmapped layers are currently not supported
	[Oct18 11:49] overlayfs: idmapped layers are currently not supported
	[Oct18 11:50] overlayfs: idmapped layers are currently not supported
	[Oct18 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.885672] overlayfs: idmapped layers are currently not supported
	[ +14.381354] overlayfs: idmapped layers are currently not supported
	[Oct18 11:52] overlayfs: idmapped layers are currently not supported
	[Oct18 11:53] overlayfs: idmapped layers are currently not supported
	[Oct18 11:54] overlayfs: idmapped layers are currently not supported
	[Oct18 11:55] overlayfs: idmapped layers are currently not supported
	[ +48.139503] overlayfs: idmapped layers are currently not supported
	[Oct18 11:56] overlayfs: idmapped layers are currently not supported
	[Oct18 11:58] overlayfs: idmapped layers are currently not supported
	[Oct18 12:00] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [7f058fe4c8a27bae83a3121872d9020f0b81bb2a961f4d1d3865631f9eb1cb98] <==
	{"level":"warn","ts":"2025-10-18T12:01:45.764585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:45.783426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:45.808895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:45.827272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:45.843184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:45.867405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:45.921253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:45.957774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:46.020579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:01:46.183723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:04.903202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:04.925269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.079356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.114347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.147588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.173656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.239520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.269796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.297409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.322343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.342374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.368579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.388390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.403048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:02:24.420470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40092","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:08:16 up 13:50,  0 user,  load average: 0.83, 1.55, 2.38
	Linux addons-897172 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ae55d5b3011674645856125c077c0b37c32b369b9d48901bc0f2b10e818a5d03] <==
	I1018 12:06:16.515066       1 main.go:301] handling current node
	I1018 12:06:26.515982       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:06:26.516040       1 main.go:301] handling current node
	I1018 12:06:36.515248       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:06:36.515366       1 main.go:301] handling current node
	I1018 12:06:46.521194       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:06:46.521230       1 main.go:301] handling current node
	I1018 12:06:56.514042       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:06:56.514102       1 main.go:301] handling current node
	I1018 12:07:06.515997       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:07:06.516101       1 main.go:301] handling current node
	I1018 12:07:16.515121       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:07:16.515155       1 main.go:301] handling current node
	I1018 12:07:26.521272       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:07:26.521309       1 main.go:301] handling current node
	I1018 12:07:36.518053       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:07:36.518092       1 main.go:301] handling current node
	I1018 12:07:46.519975       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:07:46.520011       1 main.go:301] handling current node
	I1018 12:07:56.514082       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:07:56.514136       1 main.go:301] handling current node
	I1018 12:08:06.516378       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:08:06.516582       1 main.go:301] handling current node
	I1018 12:08:16.514386       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:08:16.514423       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a4a3de681e4e8ff78c7f0626b2e00e1dca908b684158845b5a0598ddecd97b44] <==
	W1018 12:04:17.712049       1 cacher.go:182] Terminating all watchers from cacher hypernodes.topology.volcano.sh
	W1018 12:04:18.617150       1 cacher.go:182] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	W1018 12:04:18.752066       1 cacher.go:182] Terminating all watchers from cacher jobflows.flow.volcano.sh
	E1018 12:04:36.296356       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33584: use of closed network connection
	E1018 12:04:36.552640       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33612: use of closed network connection
	E1018 12:04:36.688215       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:33634: use of closed network connection
	I1018 12:04:46.787254       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.253.206"}
	I1018 12:05:38.581797       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1018 12:05:40.813963       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1018 12:05:48.867172       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1018 12:05:57.102952       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 12:05:57.102999       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 12:05:57.126737       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 12:05:57.128261       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 12:05:57.139026       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 12:05:57.139069       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 12:05:57.164107       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 12:05:57.164157       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1018 12:05:57.193879       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1018 12:05:57.195673       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1018 12:05:58.127775       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1018 12:05:58.194341       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1018 12:05:58.306383       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1018 12:06:04.563228       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 12:06:04.832121       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.67.8"}
	
	
	==> kube-controller-manager [5f5b20ddb03b78b97b34dae7af991cacd7a4814b2e47d6f00498550e3a948b41] <==
	E1018 12:07:27.859058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:07:38.649439       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:07:38.650559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:07:42.592970       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:07:42.594256       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:07:44.310670       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:07:44.311712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:07:52.414246       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:07:52.415233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:07:56.941932       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:07:56.944272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:08:00.129844       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:08:00.133591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:08:02.037954       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:08:02.039196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:08:03.081087       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:08:03.082241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:08:03.275561       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:08:03.276839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:08:13.161915       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:08:13.163007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:08:13.858659       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:08:13.859820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1018 12:08:15.232617       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1018 12:08:15.233680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [0f2a5a2b37744d869e19d0c2f143c407ed44b5af5d0e9ff2e2e66ed49f58124f] <==
	I1018 12:01:56.278233       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:01:56.385311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:01:56.485817       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:01:56.485854       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:01:56.485931       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:01:56.520377       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:01:56.520428       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:01:56.612820       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:01:56.613137       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:01:56.613152       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:01:56.614559       1 config.go:200] "Starting service config controller"
	I1018 12:01:56.614568       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:01:56.614584       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:01:56.614588       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:01:56.614598       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:01:56.614608       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:01:56.618429       1 config.go:309] "Starting node config controller"
	I1018 12:01:56.618452       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:01:56.618476       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:01:56.715939       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:01:56.715961       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:01:56.715931       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
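The "configuration may be incomplete" warning above is advisory rather than an error: with nodePortAddresses unset, NodePort services accept traffic on every local IP. The remedy the log itself names, shown here in flag form only (how a given minikube profile threads kube-proxy flags through its config is deployment-specific and not shown):

    kube-proxy --nodeport-addresses=primary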
	
	
	==> kube-scheduler [eed961508f62df2082fd87bc190e9e45a0d98f76c26c34aabd2e3a5140f8463e] <==
	E1018 12:01:47.170669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:01:47.170735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:01:47.172008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:01:47.172243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:01:47.172299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:01:47.172335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:01:47.172370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:01:47.175927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:01:47.189630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:01:47.189827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:01:47.986084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:01:48.034367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:01:48.046700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:01:48.139900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:01:48.160417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:01:48.192851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:01:48.270932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:01:48.343558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:01:48.345092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:01:48.356568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:01:48.380141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:01:48.433685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:01:48.475010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:01:48.484565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1018 12:01:49.942945       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:06:49 addons-897172 kubelet[1481]: E1018 12:06:49.320576    1481 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(69e78953-0244-4b1b-b6b5-2de0b5385adf): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 12:06:49 addons-897172 kubelet[1481]: E1018 12:06:49.320629    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:06:51 addons-897172 kubelet[1481]: I1018 12:06:51.215965    1481 scope.go:117] "RemoveContainer" containerID="d8197d3392c1297a628cbea47e14e0968109e1125d735db04488b9a9163f7208"
	Oct 18 12:06:55 addons-897172 kubelet[1481]: I1018 12:06:55.915568    1481 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:06:55 addons-897172 kubelet[1481]: E1018 12:06:55.916330    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-b6zx6" podUID="c32793b2-06b3-4b42-9b78-938c01bcfd38"
	Oct 18 12:07:00 addons-897172 kubelet[1481]: E1018 12:07:00.916494    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:07:00 addons-897172 kubelet[1481]: E1018 12:07:00.916512    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:07:12 addons-897172 kubelet[1481]: E1018 12:07:12.916290    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:07:15 addons-897172 kubelet[1481]: E1018 12:07:15.916027    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:07:23 addons-897172 kubelet[1481]: E1018 12:07:23.916460    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:07:31 addons-897172 kubelet[1481]: E1018 12:07:31.349076    1481 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 18 12:07:31 addons-897172 kubelet[1481]: E1018 12:07:31.349143    1481 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:alpine"
	Oct 18 12:07:31 addons-897172 kubelet[1481]: E1018 12:07:31.349226    1481 kuberuntime_manager.go:1449] "Unhandled Error" err="container nginx start failed in pod nginx_default(69e78953-0244-4b1b-b6b5-2de0b5385adf): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 12:07:31 addons-897172 kubelet[1481]: E1018 12:07:31.349265    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:07:35 addons-897172 kubelet[1481]: E1018 12:07:35.919065    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:07:46 addons-897172 kubelet[1481]: E1018 12:07:46.916156    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:07:49 addons-897172 kubelet[1481]: E1018 12:07:49.917270    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:07:59 addons-897172 kubelet[1481]: E1018 12:07:59.917798    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:08:02 addons-897172 kubelet[1481]: E1018 12:08:02.340810    1481 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Oct 18 12:08:02 addons-897172 kubelet[1481]: E1018 12:08:02.340868    1481 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Oct 18 12:08:02 addons-897172 kubelet[1481]: E1018 12:08:02.340937    1481 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod test-local-path_default(197fd552-3e3a-410b-910a-4e3b17e76bd5): ErrImagePull: failed to pull and unpack image \"docker.io/library/busybox:stable\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 12:08:02 addons-897172 kubelet[1481]: E1018 12:08:02.340974    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	Oct 18 12:08:10 addons-897172 kubelet[1481]: I1018 12:08:10.915994    1481 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 12:08:12 addons-897172 kubelet[1481]: E1018 12:08:12.916171    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="69e78953-0244-4b1b-b6b5-2de0b5385adf"
	Oct 18 12:08:15 addons-897172 kubelet[1481]: E1018 12:08:15.927942    1481 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/busybox:stable\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="197fd552-3e3a-410b-910a-4e3b17e76bd5"
	
	
	==> storage-provisioner [2d600e8cf22c791fa7ffc6ec034cffef3fa5102dfd75225ce6fa10114b83e94b] <==
	W1018 12:07:52.243800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:07:54.246972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:07:54.254332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:07:56.257666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:07:56.262233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:07:58.265592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:07:58.269418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:00.275096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:00.290106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:02.295825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:02.303785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:04.306362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:04.310803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:06.314807       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:06.319301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:08.323920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:08.328265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:10.331308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:10.337984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:12.341199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:12.347327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:14.350145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:14.354516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:16.358035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:08:16.362505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
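Note: every ErrImagePull in the logs above is Docker Hub's anonymous pull limit (HTTP 429 toomanyrequests) rather than a cluster fault. A minimal workaround sketch, assuming valid Docker Hub credentials are at hand (the secret name hub-creds and the DOCKER_USER/DOCKER_PAT variables below are hypothetical):

	# Create a registry credential and attach it to the default service account,
	# so the kubelet pulls from docker.io as an authenticated user (higher quota).
	kubectl --context addons-897172 create secret docker-registry hub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PAT"
	kubectl --context addons-897172 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"hub-creds"}]}'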
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-897172 -n addons-897172
helpers_test.go:269: (dbg) Run:  kubectl --context addons-897172 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx test-local-path ingress-nginx-admission-create-kx9wc ingress-nginx-admission-patch-xmghg registry-creds-764b6fb674-b6zx6
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-897172 describe pod nginx test-local-path ingress-nginx-admission-create-kx9wc ingress-nginx-admission-patch-xmghg registry-creds-764b6fb674-b6zx6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-897172 describe pod nginx test-local-path ingress-nginx-admission-create-kx9wc ingress-nginx-admission-patch-xmghg registry-creds-764b6fb674-b6zx6: exit status 1 (112.308402ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-897172/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:06:04 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.34
	IPs:
	  IP:  10.244.0.34
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2sf6x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2sf6x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m13s                default-scheduler  Successfully assigned default/nginx to addons-897172
	  Warning  Failed     2m12s                kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    47s (x4 over 2m12s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     46s (x4 over 2m12s)  kubelet            Error: ErrImagePull
	  Warning  Failed     46s (x3 over 116s)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    5s (x7 over 2m12s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     5s (x7 over 2m12s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-897172/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:05:14 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.31
	IPs:
	  IP:  10.244.0.31
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvdh2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-kvdh2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  3m3s                default-scheduler  Successfully assigned default/test-local-path to addons-897172
	  Normal   Pulling    16s (x5 over 3m2s)  kubelet            Pulling image "busybox:stable"
	  Warning  Failed     15s (x5 over 3m2s)  kubelet            Failed to pull image "busybox:stable": failed to pull and unpack image "docker.io/library/busybox:stable": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/busybox/manifests/sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     15s (x5 over 3m2s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2s (x11 over 3m2s)  kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     2s (x11 over 3m2s)  kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-kx9wc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xmghg" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-b6zx6" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-897172 describe pod nginx test-local-path ingress-nginx-admission-create-kx9wc ingress-nginx-admission-patch-xmghg registry-creds-764b6fb674-b6zx6: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-897172 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.835629241s)
--- FAIL: TestAddons/parallel/LocalPath (230.48s)
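Both image-pull failures above could likely be avoided in CI by side-loading the test images into the node instead of pulling from docker.io at test time; a sketch using minikube's image loader (profile name taken from this run):

	# Pull once on the host (or from a mirror), then inject into the cluster node,
	# so the kubelet never has to contact registry-1.docker.io.
	docker pull nginx:alpine && docker pull busybox:stable
	minikube -p addons-897172 image load nginx:alpine
	minikube -p addons-897172 image load busybox:stable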

TestDockerEnvContainerd (48.39s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-126335 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-126335 --driver=docker  --container-runtime=containerd: (30.325365993s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-126335"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-126335": (1.137754062s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-rqH8JtdsN6JB/agent.2101720" SSH_AGENT_PID="2101721" DOCKER_HOST=ssh://docker@127.0.0.1:35699 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-rqH8JtdsN6JB/agent.2101720" SSH_AGENT_PID="2101721" DOCKER_HOST=ssh://docker@127.0.0.1:35699 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Non-zero exit: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-rqH8JtdsN6JB/agent.2101720" SSH_AGENT_PID="2101721" DOCKER_HOST=ssh://docker@127.0.0.1:35699 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": exit status 1 (902.329315ms)

-- stdout --
	Sending build context to Docker daemon  2.048kB

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

** /stderr **
docker_test.go:245: failed to build images, error: exit status 1, output:
-- stdout --
	Sending build context to Docker daemon  2.048kB

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
	            environment-variable.
	
	Error response from daemon: exit status 1

** /stderr **
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-rqH8JtdsN6JB/agent.2101720" SSH_AGENT_PID="2101721" DOCKER_HOST=ssh://docker@127.0.0.1:35699 docker image ls"
docker_test.go:255: failed to detect image 'local/minikube-dockerenv-containerd-test' in output of docker image ls
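The legacy builder reports only "Error response from daemon: exit status 1", which hides the real failure. A quick way to get a more informative error, reusing the same SSH agent and forwarded port from this run, would be to retry with BuildKit enabled (a diagnostic sketch, not part of the test):

	# Same ssh:// endpoint the test used, but with the BuildKit builder,
	# which streams the daemon-side failure reason back to the client.
	SSH_AUTH_SOCK=/tmp/ssh-rqH8JtdsN6JB/agent.2101720 SSH_AGENT_PID=2101721 \
	DOCKER_HOST=ssh://docker@127.0.0.1:35699 DOCKER_BUILDKIT=1 \
	docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env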
panic.go:636: *** TestDockerEnvContainerd FAILED at 2025-10-18 12:15:19.567659217 +0000 UTC m=+878.825297384
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestDockerEnvContainerd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestDockerEnvContainerd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect dockerenv-126335
helpers_test.go:243: (dbg) docker inspect dockerenv-126335:

-- stdout --
	[
	    {
	        "Id": "6d43f9ec00cc7f810ff0f9855db82f577548b79b116fe88bd1f22f934c0b5678",
	        "Created": "2025-10-18T12:14:41.187299686Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2099390,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:14:41.25339931Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/6d43f9ec00cc7f810ff0f9855db82f577548b79b116fe88bd1f22f934c0b5678/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6d43f9ec00cc7f810ff0f9855db82f577548b79b116fe88bd1f22f934c0b5678/hostname",
	        "HostsPath": "/var/lib/docker/containers/6d43f9ec00cc7f810ff0f9855db82f577548b79b116fe88bd1f22f934c0b5678/hosts",
	        "LogPath": "/var/lib/docker/containers/6d43f9ec00cc7f810ff0f9855db82f577548b79b116fe88bd1f22f934c0b5678/6d43f9ec00cc7f810ff0f9855db82f577548b79b116fe88bd1f22f934c0b5678-json.log",
	        "Name": "/dockerenv-126335",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "dockerenv-126335:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "dockerenv-126335",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6d43f9ec00cc7f810ff0f9855db82f577548b79b116fe88bd1f22f934c0b5678",
	                "LowerDir": "/var/lib/docker/overlay2/9344bfc2d79992019fdc24dc2ba830d4ab9936c8bca2ced8c4b05d1d6f17e82b-init/diff:/var/lib/docker/overlay2/647b2423f8222638985dff90791465004ec84c7fd61ca3830bba92bce09f80ef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9344bfc2d79992019fdc24dc2ba830d4ab9936c8bca2ced8c4b05d1d6f17e82b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9344bfc2d79992019fdc24dc2ba830d4ab9936c8bca2ced8c4b05d1d6f17e82b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9344bfc2d79992019fdc24dc2ba830d4ab9936c8bca2ced8c4b05d1d6f17e82b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "dockerenv-126335",
	                "Source": "/var/lib/docker/volumes/dockerenv-126335/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "dockerenv-126335",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "dockerenv-126335",
	                "name.minikube.sigs.k8s.io": "dockerenv-126335",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3c1c9d58324164c211930ece826eed369a9601161bdc3156b304f4bca464dcd7",
	            "SandboxKey": "/var/run/docker/netns/3c1c9d583241",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35699"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35700"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35703"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35701"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35702"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "dockerenv-126335": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:7a:7a:db:98:63",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ab44fd037a045ebcf5d068f6e4c574f90a22cf9420bef23a55fd5ae475f5b773",
	                    "EndpointID": "8cf5428ebb91c12319c905447d7cdd762797fd6f61d0fdc692467fbd6d2ef80c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "dockerenv-126335",
	                        "6d43f9ec00cc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
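The inspect output confirms the port mapping behind the DOCKER_HOST the test exported (container port 22/tcp published on 127.0.0.1:35699), so daemon reachability can be checked independently of the failing build step; a sketch:

	# Verify the SSH-tunnelled daemon answers before suspecting the build itself.
	DOCKER_HOST=ssh://docker@127.0.0.1:35699 docker version --format '{{.Server.Version}}'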
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p dockerenv-126335 -n dockerenv-126335
helpers_test.go:252: <<< TestDockerEnvContainerd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestDockerEnvContainerd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p dockerenv-126335 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p dockerenv-126335 logs -n 25: (1.396808243s)
helpers_test.go:260: TestDockerEnvContainerd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬─────────────────────────────────────────────────────────────────────────────────┬──────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                      ARGS                                       │     PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────┼─────────────────────────────────────────────────────────────────────────────────┼──────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons     │ addons-897172 addons disable volcano --alsologtostderr -v=1                     │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:04 UTC │ 18 Oct 25 12:04 UTC │
	│ addons     │ addons-897172 addons disable gcp-auth --alsologtostderr -v=1                    │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:04 UTC │ 18 Oct 25 12:04 UTC │
	│ addons     │ enable headlamp -p addons-897172 --alsologtostderr -v=1                         │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:04 UTC │ 18 Oct 25 12:04 UTC │
	│ addons     │ addons-897172 addons disable headlamp --alsologtostderr -v=1                    │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:04 UTC │ 18 Oct 25 12:05 UTC │
	│ ip         │ addons-897172 ip                                                                │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons     │ addons-897172 addons disable registry --alsologtostderr -v=1                    │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons     │ addons-897172 addons disable yakd --alsologtostderr -v=1                        │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons     │ addons-897172 addons disable nvidia-device-plugin --alsologtostderr -v=1        │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons     │ addons-897172 addons disable cloud-spanner --alsologtostderr -v=1               │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons     │ addons-897172 addons disable metrics-server --alsologtostderr -v=1              │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons     │ addons-897172 addons disable volumesnapshots --alsologtostderr -v=1             │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:05 UTC │
	│ addons     │ addons-897172 addons disable csi-hostpath-driver --alsologtostderr -v=1         │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:05 UTC │ 18 Oct 25 12:06 UTC │
	│ addons     │ addons-897172 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:08 UTC │ 18 Oct 25 12:09 UTC │
	│ addons     │ addons-897172 addons disable inspektor-gadget --alsologtostderr -v=1            │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:09 UTC │ 18 Oct 25 12:09 UTC │
	│ addons     │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-897172  │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:09 UTC │ 18 Oct 25 12:09 UTC │
	│ addons     │ addons-897172 addons disable registry-creds --alsologtostderr -v=1              │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:09 UTC │ 18 Oct 25 12:09 UTC │
	│ addons     │ addons-897172 addons disable ingress-dns --alsologtostderr -v=1                 │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:14 UTC │ 18 Oct 25 12:14 UTC │
	│ addons     │ addons-897172 addons disable ingress --alsologtostderr -v=1                     │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:14 UTC │ 18 Oct 25 12:14 UTC │
	│ stop       │ -p addons-897172                                                                │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:14 UTC │ 18 Oct 25 12:14 UTC │
	│ addons     │ enable dashboard -p addons-897172                                               │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:14 UTC │ 18 Oct 25 12:14 UTC │
	│ addons     │ disable dashboard -p addons-897172                                              │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:14 UTC │ 18 Oct 25 12:14 UTC │
	│ addons     │ disable gvisor -p addons-897172                                                 │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:14 UTC │ 18 Oct 25 12:14 UTC │
	│ delete     │ -p addons-897172                                                                │ addons-897172    │ jenkins │ v1.37.0 │ 18 Oct 25 12:14 UTC │ 18 Oct 25 12:14 UTC │
	│ start      │ -p dockerenv-126335 --driver=docker  --container-runtime=containerd             │ dockerenv-126335 │ jenkins │ v1.37.0 │ 18 Oct 25 12:14 UTC │ 18 Oct 25 12:15 UTC │
	│ docker-env │ --ssh-host --ssh-add -p dockerenv-126335                                        │ dockerenv-126335 │ jenkins │ v1.37.0 │ 18 Oct 25 12:15 UTC │ 18 Oct 25 12:15 UTC │
	└────────────┴─────────────────────────────────────────────────────────────────────────────────┴──────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:14:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:14:35.814400 2099001 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:14:35.814528 2099001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:14:35.814532 2099001 out.go:374] Setting ErrFile to fd 2...
	I1018 12:14:35.814537 2099001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:14:35.814800 2099001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
	I1018 12:14:35.815257 2099001 out.go:368] Setting JSON to false
	I1018 12:14:35.816166 2099001 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":50223,"bootTime":1760739453,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 12:14:35.816227 2099001 start.go:141] virtualization:  
	I1018 12:14:35.823352 2099001 out.go:179] * [dockerenv-126335] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:14:35.827098 2099001 notify.go:220] Checking for updates...
	I1018 12:14:35.831360 2099001 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:14:35.835021 2099001 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:14:35.838438 2099001 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	I1018 12:14:35.841649 2099001 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	I1018 12:14:35.844997 2099001 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:14:35.848065 2099001 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:14:35.851458 2099001 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:14:35.874912 2099001 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:14:35.875043 2099001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:14:35.939087 2099001 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 12:14:35.929769955 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:14:35.939182 2099001 docker.go:318] overlay module found
	I1018 12:14:35.943993 2099001 out.go:179] * Using the docker driver based on user configuration
	I1018 12:14:35.946825 2099001 start.go:305] selected driver: docker
	I1018 12:14:35.946842 2099001 start.go:925] validating driver "docker" against <nil>
	I1018 12:14:35.946856 2099001 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:14:35.946964 2099001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:14:36.010633 2099001 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-18 12:14:35.999503363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:14:36.010801 2099001 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 12:14:36.011095 2099001 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1018 12:14:36.011318 2099001 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 12:14:36.014242 2099001 out.go:179] * Using Docker driver with root privileges
	I1018 12:14:36.017121 2099001 cni.go:84] Creating CNI manager for ""
	I1018 12:14:36.017186 2099001 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1018 12:14:36.017202 2099001 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:14:36.017285 2099001 start.go:349] cluster config:
	{Name:dockerenv-126335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-126335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:14:36.020306 2099001 out.go:179] * Starting "dockerenv-126335" primary control-plane node in "dockerenv-126335" cluster
	I1018 12:14:36.023163 2099001 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1018 12:14:36.026073 2099001 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:14:36.029017 2099001 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1018 12:14:36.029085 2099001 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1018 12:14:36.029097 2099001 cache.go:58] Caching tarball of preloaded images
	I1018 12:14:36.029104 2099001 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:14:36.029199 2099001 preload.go:233] Found /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1018 12:14:36.029208 2099001 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1018 12:14:36.029525 2099001 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/config.json ...
	I1018 12:14:36.029544 2099001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/config.json: {Name:mk232a37ee1096fe758e0e64aa1362b1bf3e57c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:14:36.053505 2099001 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:14:36.053517 2099001 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:14:36.053536 2099001 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:14:36.053570 2099001 start.go:360] acquireMachinesLock for dockerenv-126335: {Name:mk007fb829757ab6c04a22473bc12d4d062502b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:14:36.053684 2099001 start.go:364] duration metric: took 99.813µs to acquireMachinesLock for "dockerenv-126335"
	I1018 12:14:36.053710 2099001 start.go:93] Provisioning new machine with config: &{Name:dockerenv-126335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-126335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1018 12:14:36.053777 2099001 start.go:125] createHost starting for "" (driver="docker")
	I1018 12:14:36.057088 2099001 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 12:14:36.057307 2099001 start.go:159] libmachine.API.Create for "dockerenv-126335" (driver="docker")
	I1018 12:14:36.057338 2099001 client.go:168] LocalClient.Create starting
	I1018 12:14:36.057404 2099001 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem
	I1018 12:14:36.057436 2099001 main.go:141] libmachine: Decoding PEM data...
	I1018 12:14:36.057448 2099001 main.go:141] libmachine: Parsing certificate...
	I1018 12:14:36.057497 2099001 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/cert.pem
	I1018 12:14:36.057516 2099001 main.go:141] libmachine: Decoding PEM data...
	I1018 12:14:36.057525 2099001 main.go:141] libmachine: Parsing certificate...
	I1018 12:14:36.057892 2099001 cli_runner.go:164] Run: docker network inspect dockerenv-126335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 12:14:36.074204 2099001 cli_runner.go:211] docker network inspect dockerenv-126335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 12:14:36.074288 2099001 network_create.go:284] running [docker network inspect dockerenv-126335] to gather additional debugging logs...
	I1018 12:14:36.074303 2099001 cli_runner.go:164] Run: docker network inspect dockerenv-126335
	W1018 12:14:36.090694 2099001 cli_runner.go:211] docker network inspect dockerenv-126335 returned with exit code 1
	I1018 12:14:36.090714 2099001 network_create.go:287] error running [docker network inspect dockerenv-126335]: docker network inspect dockerenv-126335: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network dockerenv-126335 not found
	I1018 12:14:36.090727 2099001 network_create.go:289] output of [docker network inspect dockerenv-126335]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network dockerenv-126335 not found
	
	** /stderr **
	I1018 12:14:36.090818 2099001 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:14:36.107169 2099001 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400195d460}
	I1018 12:14:36.107197 2099001 network_create.go:124] attempt to create docker network dockerenv-126335 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 12:14:36.107253 2099001 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=dockerenv-126335 dockerenv-126335
	I1018 12:14:36.163234 2099001 network_create.go:108] docker network dockerenv-126335 192.168.49.0/24 created
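minikube picked the free private subnet 192.168.49.0/24 and created a bridge network for the cluster. A sketch of the same `docker network create` invocation driven from Go via os/exec; the name, subnet, gateway, and MTU are simply the values from this run:

package main

import (
	"fmt"
	"os/exec"
)

// createClusterNetwork runs the docker network create command shown in
// the log above, with a fixed subnet/gateway and an explicit MTU.
func createClusterNetwork(name, subnet, gateway string) error {
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("network create failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := createClusterNetwork("dockerenv-126335", "192.168.49.0/24", "192.168.49.1"); err != nil {
		fmt.Println(err)
	}
}
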
	I1018 12:14:36.163256 2099001 kic.go:121] calculated static IP "192.168.49.2" for the "dockerenv-126335" container
	I1018 12:14:36.163342 2099001 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 12:14:36.178008 2099001 cli_runner.go:164] Run: docker volume create dockerenv-126335 --label name.minikube.sigs.k8s.io=dockerenv-126335 --label created_by.minikube.sigs.k8s.io=true
	I1018 12:14:36.201075 2099001 oci.go:103] Successfully created a docker volume dockerenv-126335
	I1018 12:14:36.201168 2099001 cli_runner.go:164] Run: docker run --rm --name dockerenv-126335-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-126335 --entrypoint /usr/bin/test -v dockerenv-126335:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 12:14:36.756315 2099001 oci.go:107] Successfully prepared a docker volume dockerenv-126335
	I1018 12:14:36.756348 2099001 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1018 12:14:36.756366 2099001 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 12:14:36.756430 2099001 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v dockerenv-126335:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 12:14:41.112100 2099001 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v dockerenv-126335:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.35563452s)
	I1018 12:14:41.112120 2099001 kic.go:203] duration metric: took 4.355750742s to extract preloaded images to volume ...
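The preload step populates the named volume by running a throwaway container whose entrypoint is tar, with the tarball bind-mounted read-only at /preloaded.tar and the volume mounted at /extractDir. A sketch of that pattern; the tarball path in main is a hypothetical placeholder:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload fills a named docker volume by running a short-lived
// container that untars the preloaded images into /extractDir, as in
// the cli_runner invocation logged above.
func extractPreload(volume, tarball, image string) error {
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical local tarball path for illustration.
	if err := extractPreload("dockerenv-126335",
		"/path/to/preloaded-images.tar.lz4",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"); err != nil {
		fmt.Println(err)
	}
}
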
	W1018 12:14:41.112518 2099001 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 12:14:41.112617 2099001 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 12:14:41.173158 2099001 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname dockerenv-126335 --name dockerenv-126335 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=dockerenv-126335 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=dockerenv-126335 --network dockerenv-126335 --ip 192.168.49.2 --volume dockerenv-126335:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 12:14:41.495899 2099001 cli_runner.go:164] Run: docker container inspect dockerenv-126335 --format={{.State.Running}}
	I1018 12:14:41.516372 2099001 cli_runner.go:164] Run: docker container inspect dockerenv-126335 --format={{.State.Status}}
	I1018 12:14:41.541216 2099001 cli_runner.go:164] Run: docker exec dockerenv-126335 stat /var/lib/dpkg/alternatives/iptables
	I1018 12:14:41.594556 2099001 oci.go:144] the created container "dockerenv-126335" has a running status.
	I1018 12:14:41.594576 2099001 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/dockerenv-126335/id_rsa...
	I1018 12:14:42.881534 2099001 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/dockerenv-126335/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 12:14:42.902283 2099001 cli_runner.go:164] Run: docker container inspect dockerenv-126335 --format={{.State.Status}}
	I1018 12:14:42.921153 2099001 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 12:14:42.921164 2099001 kic_runner.go:114] Args: [docker exec --privileged dockerenv-126335 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 12:14:42.973685 2099001 cli_runner.go:164] Run: docker container inspect dockerenv-126335 --format={{.State.Status}}
	I1018 12:14:42.992601 2099001 machine.go:93] provisionDockerMachine start ...
	I1018 12:14:42.992702 2099001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-126335
	I1018 12:14:43.013758 2099001 main.go:141] libmachine: Using SSH client type: native
	I1018 12:14:43.014141 2099001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 35699 <nil> <nil>}
	I1018 12:14:43.014149 2099001 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:14:43.171953 2099001 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-126335
	
	I1018 12:14:43.171966 2099001 ubuntu.go:182] provisioning hostname "dockerenv-126335"
	I1018 12:14:43.172036 2099001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-126335
	I1018 12:14:43.191801 2099001 main.go:141] libmachine: Using SSH client type: native
	I1018 12:14:43.192200 2099001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 35699 <nil> <nil>}
	I1018 12:14:43.192220 2099001 main.go:141] libmachine: About to run SSH command:
	sudo hostname dockerenv-126335 && echo "dockerenv-126335" | sudo tee /etc/hostname
	I1018 12:14:43.359129 2099001 main.go:141] libmachine: SSH cmd err, output: <nil>: dockerenv-126335
	
	I1018 12:14:43.359197 2099001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-126335
	I1018 12:14:43.380792 2099001 main.go:141] libmachine: Using SSH client type: native
	I1018 12:14:43.381096 2099001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 35699 <nil> <nil>}
	I1018 12:14:43.381120 2099001 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdockerenv-126335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 dockerenv-126335/g' /etc/hosts;
				else 
					echo '127.0.1.1 dockerenv-126335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:14:43.528240 2099001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:14:43.528257 2099001 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-2075029/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-2075029/.minikube}
	I1018 12:14:43.528272 2099001 ubuntu.go:190] setting up certificates
	I1018 12:14:43.528280 2099001 provision.go:84] configureAuth start
	I1018 12:14:43.528340 2099001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-126335
	I1018 12:14:43.545139 2099001 provision.go:143] copyHostCerts
	I1018 12:14:43.545197 2099001 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-2075029/.minikube/key.pem, removing ...
	I1018 12:14:43.545211 2099001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-2075029/.minikube/key.pem
	I1018 12:14:43.545293 2099001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-2075029/.minikube/key.pem (1675 bytes)
	I1018 12:14:43.545390 2099001 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.pem, removing ...
	I1018 12:14:43.545393 2099001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.pem
	I1018 12:14:43.545418 2099001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.pem (1078 bytes)
	I1018 12:14:43.545466 2099001 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-2075029/.minikube/cert.pem, removing ...
	I1018 12:14:43.545469 2099001 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-2075029/.minikube/cert.pem
	I1018 12:14:43.545490 2099001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-2075029/.minikube/cert.pem (1123 bytes)
	I1018 12:14:43.545534 2099001 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca-key.pem org=jenkins.dockerenv-126335 san=[127.0.0.1 192.168.49.2 dockerenv-126335 localhost minikube]
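provision.go:117 signs a server certificate against the minikube CA with the SAN list shown (127.0.0.1, 192.168.49.2, dockerenv-126335, localhost, minikube). A self-contained sketch of that kind of CA-signed SAN certificate using Go's crypto/x509; it generates a throwaway CA in place of minikube's ca.pem/ca-key.pem and elides error handling:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikube's ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN set from the log: IPs 127.0.0.1 and
	// 192.168.49.2, DNS names dockerenv-126335, localhost, minikube.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.dockerenv-126335"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:     []string{"dockerenv-126335", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
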
	I1018 12:14:43.792104 2099001 provision.go:177] copyRemoteCerts
	I1018 12:14:43.792155 2099001 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:14:43.792199 2099001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-126335
	I1018 12:14:43.808590 2099001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35699 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/dockerenv-126335/id_rsa Username:docker}
	I1018 12:14:43.911327 2099001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 12:14:43.929963 2099001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1018 12:14:43.946592 2099001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 12:14:43.963653 2099001 provision.go:87] duration metric: took 435.352372ms to configureAuth
	I1018 12:14:43.963671 2099001 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:14:43.963867 2099001 config.go:182] Loaded profile config "dockerenv-126335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:14:43.963873 2099001 machine.go:96] duration metric: took 971.26151ms to provisionDockerMachine
	I1018 12:14:43.963878 2099001 client.go:171] duration metric: took 7.906535553s to LocalClient.Create
	I1018 12:14:43.963889 2099001 start.go:167] duration metric: took 7.90658434s to libmachine.API.Create "dockerenv-126335"
	I1018 12:14:43.963895 2099001 start.go:293] postStartSetup for "dockerenv-126335" (driver="docker")
	I1018 12:14:43.963908 2099001 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:14:43.963964 2099001 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:14:43.963998 2099001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-126335
	I1018 12:14:43.980706 2099001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35699 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/dockerenv-126335/id_rsa Username:docker}
	I1018 12:14:44.088266 2099001 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:14:44.091609 2099001 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:14:44.091627 2099001 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:14:44.091637 2099001 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-2075029/.minikube/addons for local assets ...
	I1018 12:14:44.091703 2099001 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-2075029/.minikube/files for local assets ...
	I1018 12:14:44.091724 2099001 start.go:296] duration metric: took 127.824634ms for postStartSetup
	I1018 12:14:44.092073 2099001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-126335
	I1018 12:14:44.109023 2099001 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/config.json ...
	I1018 12:14:44.109304 2099001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:14:44.109358 2099001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-126335
	I1018 12:14:44.125730 2099001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35699 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/dockerenv-126335/id_rsa Username:docker}
	I1018 12:14:44.224899 2099001 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:14:44.229349 2099001 start.go:128] duration metric: took 8.175558033s to createHost
	I1018 12:14:44.229364 2099001 start.go:83] releasing machines lock for "dockerenv-126335", held for 8.175672433s
	I1018 12:14:44.229429 2099001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" dockerenv-126335
	I1018 12:14:44.246154 2099001 ssh_runner.go:195] Run: cat /version.json
	I1018 12:14:44.246194 2099001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-126335
	I1018 12:14:44.246228 2099001 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:14:44.246279 2099001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-126335
	I1018 12:14:44.264090 2099001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35699 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/dockerenv-126335/id_rsa Username:docker}
	I1018 12:14:44.281708 2099001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35699 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/dockerenv-126335/id_rsa Username:docker}
	I1018 12:14:44.494550 2099001 ssh_runner.go:195] Run: systemctl --version
	I1018 12:14:44.500862 2099001 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:14:44.504995 2099001 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:14:44.505055 2099001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:14:44.535994 2099001 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 12:14:44.536007 2099001 start.go:495] detecting cgroup driver to use...
	I1018 12:14:44.536038 2099001 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:14:44.536092 2099001 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1018 12:14:44.551309 2099001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1018 12:14:44.563984 2099001 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:14:44.564035 2099001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:14:44.581345 2099001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:14:44.599360 2099001 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:14:44.715879 2099001 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:14:44.839532 2099001 docker.go:234] disabling docker service ...
	I1018 12:14:44.839589 2099001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:14:44.860747 2099001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:14:44.874187 2099001 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:14:45.000386 2099001 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:14:45.272169 2099001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:14:45.291284 2099001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:14:45.313205 2099001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1018 12:14:45.330899 2099001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1018 12:14:45.342784 2099001 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1018 12:14:45.342858 2099001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1018 12:14:45.358141 2099001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:14:45.382574 2099001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1018 12:14:45.397488 2099001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:14:45.410254 2099001 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:14:45.420517 2099001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1018 12:14:45.430391 2099001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1018 12:14:45.442464 2099001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
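The sed pipeline above switches containerd to the cgroupfs driver by rewriting SystemdCgroup in /etc/containerd/config.toml while preserving indentation. The same edit expressed in Go with a multiline regexp, where ${1} plays the role of sed's \1 backreference:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A representative fragment of config.toml before the edit.
	conf := []byte("  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n")
	// (?m) makes ^ and $ match per line; ( *) captures the indent.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Printf("%s", re.ReplaceAll(conf, []byte("${1}SystemdCgroup = false")))
}
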
	I1018 12:14:45.457677 2099001 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:14:45.467511 2099001 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:14:45.476842 2099001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:14:45.614834 2099001 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1018 12:14:45.745878 2099001 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1018 12:14:45.745937 2099001 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1018 12:14:45.749875 2099001 start.go:563] Will wait 60s for crictl version
	I1018 12:14:45.749930 2099001 ssh_runner.go:195] Run: which crictl
	I1018 12:14:45.753550 2099001 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:14:45.778493 2099001 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1018 12:14:45.778558 2099001 ssh_runner.go:195] Run: containerd --version
	I1018 12:14:45.804574 2099001 ssh_runner.go:195] Run: containerd --version
	I1018 12:14:45.827239 2099001 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1018 12:14:45.830331 2099001 cli_runner.go:164] Run: docker network inspect dockerenv-126335 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:14:45.847084 2099001 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:14:45.851035 2099001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
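The one-liner above makes the host record idempotent: strip any stale host.minikube.internal line, append the current one, and copy (rather than move) the temp file back, presumably because /etc/hosts is bind-mounted inside the container and must be rewritten in place. A sketch of the rewrite logic:

package main

import (
	"fmt"
	"strings"
)

// ensureHostRecord mirrors the grep -v / echo pipeline above: drop any
// existing record for the name, then append the current IP mapping.
func ensureHostRecord(hosts, ip, name string) string {
	var keep []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line)
		}
	}
	return strings.Join(keep, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
}

func main() {
	fmt.Print(ensureHostRecord("127.0.0.1\tlocalhost\n", "192.168.49.1", "host.minikube.internal"))
}
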
	I1018 12:14:45.860978 2099001 kubeadm.go:883] updating cluster {Name:dockerenv-126335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-126335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:14:45.861071 2099001 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1018 12:14:45.861139 2099001 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:14:45.885977 2099001 containerd.go:627] all images are preloaded for containerd runtime.
	I1018 12:14:45.885989 2099001 containerd.go:534] Images already preloaded, skipping extraction
	I1018 12:14:45.886052 2099001 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:14:45.910590 2099001 containerd.go:627] all images are preloaded for containerd runtime.
	I1018 12:14:45.910602 2099001 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:14:45.910608 2099001 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 containerd true true} ...
	I1018 12:14:45.910710 2099001 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=dockerenv-126335 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:dockerenv-126335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 12:14:45.910776 2099001 ssh_runner.go:195] Run: sudo crictl info
	I1018 12:14:45.937547 2099001 cni.go:84] Creating CNI manager for ""
	I1018 12:14:45.937556 2099001 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1018 12:14:45.937570 2099001 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:14:45.937592 2099001 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:dockerenv-126335 NodeName:dockerenv-126335 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:14:45.937709 2099001 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "dockerenv-126335"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 12:14:45.937773 2099001 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:14:45.946546 2099001 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:14:45.946614 2099001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:14:45.954205 2099001 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1018 12:14:45.966428 2099001 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:14:45.979026 2099001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I1018 12:14:45.991232 2099001 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:14:45.995659 2099001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 12:14:46.006674 2099001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:14:46.136457 2099001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:14:46.153840 2099001 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335 for IP: 192.168.49.2
	I1018 12:14:46.153851 2099001 certs.go:195] generating shared ca certs ...
	I1018 12:14:46.153865 2099001 certs.go:227] acquiring lock for ca certs: {Name:mkb3a5ce8c0a7d3b9a246d80f0747d48f33f9661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:14:46.154007 2099001 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.key
	I1018 12:14:46.154042 2099001 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.key
	I1018 12:14:46.154048 2099001 certs.go:257] generating profile certs ...
	I1018 12:14:46.154103 2099001 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/client.key
	I1018 12:14:46.154112 2099001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/client.crt with IP's: []
	I1018 12:14:46.377989 2099001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/client.crt ...
	I1018 12:14:46.378005 2099001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/client.crt: {Name:mk26ec262c1bd978cba55f8b47e16c4a8e8b0909 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:14:46.378210 2099001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/client.key ...
	I1018 12:14:46.378216 2099001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/client.key: {Name:mke841cf3f4f0387635c1f8c0e6bdeba9d7d1f04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:14:46.378306 2099001 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/apiserver.key.94f23ead
	I1018 12:14:46.378318 2099001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/apiserver.crt.94f23ead with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 12:14:46.749536 2099001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/apiserver.crt.94f23ead ...
	I1018 12:14:46.749557 2099001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/apiserver.crt.94f23ead: {Name:mkffe214dec66117b01b9f16988d382a4b7387cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:14:46.749754 2099001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/apiserver.key.94f23ead ...
	I1018 12:14:46.749763 2099001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/apiserver.key.94f23ead: {Name:mk8af492578bc5a7e1fbead93cc052d0a2c633e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:14:46.749850 2099001 certs.go:382] copying /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/apiserver.crt.94f23ead -> /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/apiserver.crt
	I1018 12:14:46.749923 2099001 certs.go:386] copying /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/apiserver.key.94f23ead -> /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/apiserver.key
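The apiserver cert's SAN list above includes 10.96.0.1, the first host address of the ServiceCIDR (10.96.0.0/12), which is where the in-cluster kubernetes.default service is reached. A sketch of that derivation; it naively increments the last octet, which is sufficient for this CIDR but does not carry across octet boundaries:

package main

import (
	"fmt"
	"net"
)

// firstServiceIP derives the kubernetes.default service IP from the
// ServiceCIDR, explaining why 10.96.0.1 appears in the SAN list above.
func firstServiceIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	next := make(net.IP, len(ip))
	copy(next, ip)
	next[len(next)-1]++ // network address + 1 (no octet carry handled)
	return next, nil
}

func main() {
	ip, _ := firstServiceIP("10.96.0.0/12")
	fmt.Println(ip) // 10.96.0.1
}
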
	I1018 12:14:46.749975 2099001 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/proxy-client.key
	I1018 12:14:46.749986 2099001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/proxy-client.crt with IP's: []
	I1018 12:14:46.829859 2099001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/proxy-client.crt ...
	I1018 12:14:46.829874 2099001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/proxy-client.crt: {Name:mk85ef5c3aec06b4b099d782c14556730691cedb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:14:46.830050 2099001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/proxy-client.key ...
	I1018 12:14:46.830058 2099001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/proxy-client.key: {Name:mk6fd5c904b4f6fa9b39acfb03708a46a111ffe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:14:46.830244 2099001 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 12:14:46.830276 2099001 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem (1078 bytes)
	I1018 12:14:46.830299 2099001 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:14:46.830320 2099001 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/key.pem (1675 bytes)
	I1018 12:14:46.830846 2099001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:14:46.850067 2099001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:14:46.869661 2099001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:14:46.887951 2099001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:14:46.916537 2099001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 12:14:46.938756 2099001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 12:14:46.959110 2099001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:14:46.978467 2099001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/dockerenv-126335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:14:46.994790 2099001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:14:47.014450 2099001 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:14:47.027156 2099001 ssh_runner.go:195] Run: openssl version
	I1018 12:14:47.036039 2099001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:14:47.045249 2099001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:14:47.048929 2099001 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:01 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:14:47.048990 2099001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:14:47.091341 2099001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:14:47.099469 2099001 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:14:47.102910 2099001 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 12:14:47.102956 2099001 kubeadm.go:400] StartCluster: {Name:dockerenv-126335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:dockerenv-126335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:14:47.103021 2099001 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1018 12:14:47.103082 2099001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:14:47.127704 2099001 cri.go:89] found id: ""
	I1018 12:14:47.127799 2099001 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:14:47.135800 2099001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:14:47.143369 2099001 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 12:14:47.143422 2099001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 12:14:47.151087 2099001 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 12:14:47.151097 2099001 kubeadm.go:157] found existing configuration files:
	
	I1018 12:14:47.151150 2099001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 12:14:47.159206 2099001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 12:14:47.159268 2099001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 12:14:47.166660 2099001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 12:14:47.174467 2099001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 12:14:47.174530 2099001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:14:47.182453 2099001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 12:14:47.190354 2099001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 12:14:47.190415 2099001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:14:47.198345 2099001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 12:14:47.206220 2099001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 12:14:47.206276 2099001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 12:14:47.213840 2099001 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 12:14:47.256759 2099001 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 12:14:47.257097 2099001 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 12:14:47.279964 2099001 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 12:14:47.280044 2099001 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 12:14:47.280085 2099001 kubeadm.go:318] OS: Linux
	I1018 12:14:47.280150 2099001 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 12:14:47.280212 2099001 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 12:14:47.280264 2099001 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 12:14:47.280317 2099001 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 12:14:47.280369 2099001 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 12:14:47.280421 2099001 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 12:14:47.280469 2099001 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 12:14:47.280522 2099001 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 12:14:47.280574 2099001 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 12:14:47.356761 2099001 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 12:14:47.356890 2099001 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 12:14:47.357008 2099001 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 12:14:47.364265 2099001 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 12:14:47.370655 2099001 out.go:252]   - Generating certificates and keys ...
	I1018 12:14:47.370767 2099001 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 12:14:47.370843 2099001 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 12:14:47.873726 2099001 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 12:14:48.270675 2099001 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 12:14:48.813377 2099001 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 12:14:49.887184 2099001 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 12:14:50.362457 2099001 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 12:14:50.362750 2099001 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [dockerenv-126335 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:14:50.872627 2099001 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 12:14:50.872895 2099001 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [dockerenv-126335 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 12:14:51.759289 2099001 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 12:14:52.165798 2099001 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 12:14:53.053939 2099001 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 12:14:53.054146 2099001 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 12:14:53.451283 2099001 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 12:14:53.626852 2099001 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 12:14:54.403744 2099001 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 12:14:54.565557 2099001 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 12:14:54.689438 2099001 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 12:14:54.690006 2099001 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 12:14:54.693212 2099001 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 12:14:54.696693 2099001 out.go:252]   - Booting up control plane ...
	I1018 12:14:54.696810 2099001 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 12:14:54.696894 2099001 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 12:14:54.697932 2099001 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 12:14:54.714021 2099001 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 12:14:54.714467 2099001 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 12:14:54.722458 2099001 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 12:14:54.722788 2099001 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 12:14:54.722999 2099001 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 12:14:54.855754 2099001 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 12:14:54.855897 2099001 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 12:14:56.857408 2099001 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001724461s
	I1018 12:14:56.860920 2099001 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 12:14:56.861018 2099001 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 12:14:56.861281 2099001 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 12:14:56.861365 2099001 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 12:14:58.926582 2099001 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.065303795s
	I1018 12:15:01.684146 2099001 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.822958192s
	I1018 12:15:02.863164 2099001 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002122492s
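The three control-plane probes above hit plain HTTPS endpoints, so they can be reproduced by hand from inside the node (a sketch, assuming shell access to the container, e.g. via minikube ssh; -k skips verification of the self-signed serving certs):

    curl -sk https://192.168.49.2:8443/livez      # kube-apiserver
    curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez        # kube-scheduler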
	I1018 12:15:02.884565 2099001 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 12:15:02.902360 2099001 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 12:15:02.915010 2099001 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 12:15:02.915209 2099001 kubeadm.go:318] [mark-control-plane] Marking the node dockerenv-126335 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 12:15:02.927380 2099001 kubeadm.go:318] [bootstrap-token] Using token: fisowz.65a68xlsnnincqcv
	I1018 12:15:02.930304 2099001 out.go:252]   - Configuring RBAC rules ...
	I1018 12:15:02.930440 2099001 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 12:15:02.935198 2099001 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 12:15:02.949826 2099001 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 12:15:02.955177 2099001 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 12:15:02.959623 2099001 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 12:15:02.966026 2099001 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 12:15:03.270400 2099001 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 12:15:03.701536 2099001 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 12:15:04.269900 2099001 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 12:15:04.271090 2099001 kubeadm.go:318] 
	I1018 12:15:04.271158 2099001 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 12:15:04.271162 2099001 kubeadm.go:318] 
	I1018 12:15:04.271241 2099001 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 12:15:04.271245 2099001 kubeadm.go:318] 
	I1018 12:15:04.271270 2099001 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 12:15:04.271331 2099001 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 12:15:04.271382 2099001 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 12:15:04.271386 2099001 kubeadm.go:318] 
	I1018 12:15:04.271441 2099001 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 12:15:04.271445 2099001 kubeadm.go:318] 
	I1018 12:15:04.271496 2099001 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 12:15:04.271499 2099001 kubeadm.go:318] 
	I1018 12:15:04.271552 2099001 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 12:15:04.271634 2099001 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 12:15:04.271704 2099001 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 12:15:04.271708 2099001 kubeadm.go:318] 
	I1018 12:15:04.271795 2099001 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 12:15:04.271917 2099001 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 12:15:04.271921 2099001 kubeadm.go:318] 
	I1018 12:15:04.272008 2099001 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token fisowz.65a68xlsnnincqcv \
	I1018 12:15:04.272114 2099001 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6ad86b1276159d70ddf959ffd2834e19bb4d7329ebde5370ec0afcbde1bef9f4 \
	I1018 12:15:04.272134 2099001 kubeadm.go:318] 	--control-plane 
	I1018 12:15:04.272137 2099001 kubeadm.go:318] 
	I1018 12:15:04.272224 2099001 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 12:15:04.272228 2099001 kubeadm.go:318] 
	I1018 12:15:04.272312 2099001 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token fisowz.65a68xlsnnincqcv \
	I1018 12:15:04.272416 2099001 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:6ad86b1276159d70ddf959ffd2834e19bb4d7329ebde5370ec0afcbde1bef9f4 
	I1018 12:15:04.276946 2099001 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 12:15:04.277179 2099001 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 12:15:04.277286 2099001 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
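Bootstrap tokens like the one above are short-lived (24h by default), so the printed join commands go stale; a fresh one can be generated on the control plane with:

    # prints a complete `kubeadm join ... --token ... --discovery-token-ca-cert-hash ...` line
    sudo kubeadm token create --print-join-command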
	I1018 12:15:04.277301 2099001 cni.go:84] Creating CNI manager for ""
	I1018 12:15:04.277307 2099001 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1018 12:15:04.280514 2099001 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 12:15:04.283417 2099001 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 12:15:04.287163 2099001 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 12:15:04.287172 2099001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 12:15:04.302234 2099001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
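Since the pod name kindnet-2fgrz later in the log implies a DaemonSet named kindnet, the result of this apply can be spot-checked with (a sketch):

    kubectl --context dockerenv-126335 -n kube-system get daemonset kindnet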
	I1018 12:15:04.606013 2099001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:15:04.606143 2099001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 12:15:04.606211 2099001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes dockerenv-126335 minikube.k8s.io/updated_at=2025_10_18T12_15_04_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee minikube.k8s.io/name=dockerenv-126335 minikube.k8s.io/primary=true
	I1018 12:15:04.757296 2099001 ops.go:34] apiserver oom_adj: -16
	I1018 12:15:04.757316 2099001 kubeadm.go:1113] duration metric: took 151.223174ms to wait for elevateKubeSystemPrivileges
	I1018 12:15:04.757327 2099001 kubeadm.go:402] duration metric: took 17.65437538s to StartCluster
	I1018 12:15:04.757341 2099001 settings.go:142] acquiring lock: {Name:mkfe09c4f932c229739f9b782a8232962c7d94cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:15:04.757408 2099001 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-2075029/kubeconfig
	I1018 12:15:04.758025 2099001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/kubeconfig: {Name:mkb34a50149724994ca0c2a0fd8679c156671366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:15:04.758228 2099001 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1018 12:15:04.758351 2099001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 12:15:04.758626 2099001 config.go:182] Loaded profile config "dockerenv-126335": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:15:04.758654 2099001 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:15:04.758712 2099001 addons.go:69] Setting storage-provisioner=true in profile "dockerenv-126335"
	I1018 12:15:04.758726 2099001 addons.go:238] Setting addon storage-provisioner=true in "dockerenv-126335"
	I1018 12:15:04.758747 2099001 host.go:66] Checking if "dockerenv-126335" exists ...
	I1018 12:15:04.759240 2099001 cli_runner.go:164] Run: docker container inspect dockerenv-126335 --format={{.State.Status}}
	I1018 12:15:04.759523 2099001 addons.go:69] Setting default-storageclass=true in profile "dockerenv-126335"
	I1018 12:15:04.759574 2099001 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "dockerenv-126335"
	I1018 12:15:04.759917 2099001 cli_runner.go:164] Run: docker container inspect dockerenv-126335 --format={{.State.Status}}
	I1018 12:15:04.767941 2099001 out.go:179] * Verifying Kubernetes components...
	I1018 12:15:04.770904 2099001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:15:04.808753 2099001 addons.go:238] Setting addon default-storageclass=true in "dockerenv-126335"
	I1018 12:15:04.808778 2099001 host.go:66] Checking if "dockerenv-126335" exists ...
	I1018 12:15:04.809196 2099001 cli_runner.go:164] Run: docker container inspect dockerenv-126335 --format={{.State.Status}}
	I1018 12:15:04.811543 2099001 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:15:04.816540 2099001 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:15:04.816551 2099001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:15:04.816617 2099001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-126335
	I1018 12:15:04.842688 2099001 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:15:04.842703 2099001 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:15:04.842764 2099001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" dockerenv-126335
	I1018 12:15:04.857689 2099001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35699 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/dockerenv-126335/id_rsa Username:docker}
	I1018 12:15:04.879205 2099001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35699 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/dockerenv-126335/id_rsa Username:docker}
	I1018 12:15:05.099802 2099001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 12:15:05.143563 2099001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:15:05.167785 2099001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:15:05.238607 2099001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:15:05.513779 2099001 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
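The sed pipeline a few lines up splices a hosts stanza into the Corefile ahead of the forward plugin; the patched ConfigMap can be inspected as follows (expected stanza reconstructed from the sed expression, not from captured output):

    # the injected block should read:
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }
    kubectl --context dockerenv-126335 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'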
	I1018 12:15:05.515137 2099001 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:15:05.515271 2099001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:15:05.716637 2099001 api_server.go:72] duration metric: took 958.384235ms to wait for apiserver process to appear ...
	I1018 12:15:05.716648 2099001 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:15:05.716677 2099001 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 12:15:05.728058 2099001 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 12:15:05.730272 2099001 api_server.go:141] control plane version: v1.34.1
	I1018 12:15:05.730286 2099001 api_server.go:131] duration metric: took 13.632649ms to wait for apiserver health ...
	I1018 12:15:05.730294 2099001 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:15:05.733802 2099001 system_pods.go:59] 5 kube-system pods found
	I1018 12:15:05.733820 2099001 system_pods.go:61] "etcd-dockerenv-126335" [3eda067c-5879-4c0d-8f2c-37fd70acab45] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:15:05.733827 2099001 system_pods.go:61] "kube-apiserver-dockerenv-126335" [0bb26a61-d0d6-432d-acdf-81a799ac96ea] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:15:05.733836 2099001 system_pods.go:61] "kube-controller-manager-dockerenv-126335" [a22f62a8-cf34-4584-b067-e102c55fa549] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:15:05.733842 2099001 system_pods.go:61] "kube-scheduler-dockerenv-126335" [e1787163-9f50-45f5-855f-d4603f638d8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:15:05.733846 2099001 system_pods.go:61] "storage-provisioner" [72855eaa-af4f-4fc1-84ea-42839eab88d6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 12:15:05.733851 2099001 system_pods.go:74] duration metric: took 3.552168ms to wait for pod list to return data ...
	I1018 12:15:05.733860 2099001 kubeadm.go:586] duration metric: took 975.612884ms to wait for: map[apiserver:true system_pods:true]
	I1018 12:15:05.733870 2099001 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:15:05.735021 2099001 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 12:15:05.736755 2099001 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:15:05.736777 2099001 node_conditions.go:123] node cpu capacity is 2
	I1018 12:15:05.736786 2099001 node_conditions.go:105] duration metric: took 2.912776ms to run NodePressure ...
	I1018 12:15:05.736795 2099001 start.go:241] waiting for startup goroutines ...
	I1018 12:15:05.737987 2099001 addons.go:514] duration metric: took 979.313382ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 12:15:06.018066 2099001 kapi.go:214] "coredns" deployment in "kube-system" namespace and "dockerenv-126335" context rescaled to 1 replicas
	I1018 12:15:06.018101 2099001 start.go:246] waiting for cluster config update ...
	I1018 12:15:06.018112 2099001 start.go:255] writing updated cluster config ...
	I1018 12:15:06.018419 2099001 ssh_runner.go:195] Run: rm -f paused
	I1018 12:15:06.078833 2099001 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 12:15:06.082336 2099001 out.go:179] * Done! kubectl is now configured to use "dockerenv-126335" cluster and "default" namespace by default
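The "minor skew: 1" note two lines up is informational: kubectl 1.33.2 is within the one-minor-version window kubectl supports against a 1.34.1 API server. The pairing can be re-checked at any time with:

    # reports client and server versions side by side
    kubectl --context dockerenv-126335 version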
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	5334367724a47       138784d87c9c5       Less than a second ago   Created             coredns                   0                   bf87d7bb16a51       coredns-66bc5c9577-lfpv2                   kube-system
	2eb608a42ce17       ba04bb24b9575       Less than a second ago   Created             storage-provisioner       0                   d0df46dd0e954       storage-provisioner                        kube-system
	33347e8d4cab0       b1a8c6f707935       11 seconds ago           Running             kindnet-cni               0                   1abe28a659f72       kindnet-2fgrz                              kube-system
	c76664a54c46b       05baa95f5142d       11 seconds ago           Running             kube-proxy                0                   5a78aebb331ee       kube-proxy-spvqq                           kube-system
	2baf836b8658d       43911e833d64d       23 seconds ago           Running             kube-apiserver            0                   4d39110c44633       kube-apiserver-dockerenv-126335            kube-system
	b71a0287cfe99       7eb2c6ff0c5a7       23 seconds ago           Running             kube-controller-manager   0                   738e05fdef88e       kube-controller-manager-dockerenv-126335   kube-system
	617570fe11842       b5f57ec6b9867       23 seconds ago           Running             kube-scheduler            0                   c370df7b68518       kube-scheduler-dockerenv-126335            kube-system
	58853751465a3       a1894772a478e       23 seconds ago           Running             etcd                      0                   b0b13e9ad3f19       etcd-dockerenv-126335                      kube-system
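On a containerd runtime this table is the CRI view of the node; roughly the same listing can be produced by hand (a sketch, assuming crictl inside the node is pointed at the containerd socket):

    # -a includes containers still in Created state, like the coredns and
    # storage-provisioner rows above
    sudo crictl ps -a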
	
	
	==> containerd <==
	Oct 18 12:15:08 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:08.045501882Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Oct 18 12:15:09 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:09.107377129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-spvqq,Uid:b41f931c-6012-4e51-b7db-d335e572f24f,Namespace:kube-system,Attempt:0,}"
	Oct 18 12:15:09 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:09.112115104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-2fgrz,Uid:e65a5297-a110-4217-9feb-53737a0ea6e0,Namespace:kube-system,Attempt:0,}"
	Oct 18 12:15:09 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:09.244330410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-spvqq,Uid:b41f931c-6012-4e51-b7db-d335e572f24f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a78aebb331ee79d3d91b3f81e99ffeedeabe3ffc24284898dc547f7dd5647f3\""
	Oct 18 12:15:09 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:09.252663827Z" level=info msg="CreateContainer within sandbox \"5a78aebb331ee79d3d91b3f81e99ffeedeabe3ffc24284898dc547f7dd5647f3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Oct 18 12:15:09 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:09.281357231Z" level=info msg="CreateContainer within sandbox \"5a78aebb331ee79d3d91b3f81e99ffeedeabe3ffc24284898dc547f7dd5647f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c76664a54c46bd57e0c4e759e5604586d555b72215171142bd3dc0567a280c00\""
	Oct 18 12:15:09 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:09.283740438Z" level=info msg="StartContainer for \"c76664a54c46bd57e0c4e759e5604586d555b72215171142bd3dc0567a280c00\""
	Oct 18 12:15:09 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:09.357485460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-2fgrz,Uid:e65a5297-a110-4217-9feb-53737a0ea6e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"1abe28a659f727d647cbc282c9c4a20242201fc1029d08830417b317d312def0\""
	Oct 18 12:15:09 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:09.368698235Z" level=info msg="CreateContainer within sandbox \"1abe28a659f727d647cbc282c9c4a20242201fc1029d08830417b317d312def0\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Oct 18 12:15:09 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:09.372418867Z" level=info msg="StartContainer for \"c76664a54c46bd57e0c4e759e5604586d555b72215171142bd3dc0567a280c00\" returns successfully"
	Oct 18 12:15:09 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:09.418278915Z" level=info msg="CreateContainer within sandbox \"1abe28a659f727d647cbc282c9c4a20242201fc1029d08830417b317d312def0\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"33347e8d4cab09ae0e67c5e936aa5deec0b4166dcac2712ebe7310e038d46e63\""
	Oct 18 12:15:09 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:09.420058790Z" level=info msg="StartContainer for \"33347e8d4cab09ae0e67c5e936aa5deec0b4166dcac2712ebe7310e038d46e63\""
	Oct 18 12:15:09 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:09.526534902Z" level=info msg="StartContainer for \"33347e8d4cab09ae0e67c5e936aa5deec0b4166dcac2712ebe7310e038d46e63\" returns successfully"
	Oct 18 12:15:19 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:19.913043613Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
	Oct 18 12:15:20 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:20.400477341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lfpv2,Uid:bb1a4173-f449-42df-bac5-545b292e0d0a,Namespace:kube-system,Attempt:0,}"
	Oct 18 12:15:20 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:20.436128936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:72855eaa-af4f-4fc1-84ea-42839eab88d6,Namespace:kube-system,Attempt:0,}"
	Oct 18 12:15:20 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:20.577813478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:72855eaa-af4f-4fc1-84ea-42839eab88d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0df46dd0e95417b518bfb5a245ef8e3f8dc5df034aaeac2f130aee0b0fef2e9\""
	Oct 18 12:15:20 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:20.594595549Z" level=info msg="CreateContainer within sandbox \"d0df46dd0e95417b518bfb5a245ef8e3f8dc5df034aaeac2f130aee0b0fef2e9\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Oct 18 12:15:20 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:20.606280130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lfpv2,Uid:bb1a4173-f449-42df-bac5-545b292e0d0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf87d7bb16a515f7c2a075faf23c63ca4699e39dbe8eca364b68e567c3b86488\""
	Oct 18 12:15:20 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:20.619675419Z" level=info msg="CreateContainer within sandbox \"bf87d7bb16a515f7c2a075faf23c63ca4699e39dbe8eca364b68e567c3b86488\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Oct 18 12:15:20 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:20.625929112Z" level=info msg="CreateContainer within sandbox \"d0df46dd0e95417b518bfb5a245ef8e3f8dc5df034aaeac2f130aee0b0fef2e9\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"2eb608a42ce17ae7b66b0d4860c4161e38a9982063e18f7d1f4eea765cebd7da\""
	Oct 18 12:15:20 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:20.627220926Z" level=info msg="StartContainer for \"2eb608a42ce17ae7b66b0d4860c4161e38a9982063e18f7d1f4eea765cebd7da\""
	Oct 18 12:15:20 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:20.647585943Z" level=info msg="CreateContainer within sandbox \"bf87d7bb16a515f7c2a075faf23c63ca4699e39dbe8eca364b68e567c3b86488\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5334367724a47bda678b679a0add4f283e2cf7e62934591a534ae022273127a0\""
	Oct 18 12:15:20 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:20.651286210Z" level=info msg="StartContainer for \"5334367724a47bda678b679a0add4f283e2cf7e62934591a534ae022273127a0\""
	Oct 18 12:15:20 dockerenv-126335 containerd[755]: time="2025-10-18T12:15:20.708537825Z" level=info msg="StartContainer for \"2eb608a42ce17ae7b66b0d4860c4161e38a9982063e18f7d1f4eea765cebd7da\" returns successfully"
	
	
	==> describe nodes <==
	Name:               dockerenv-126335
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=dockerenv-126335
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=dockerenv-126335
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_15_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:15:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  dockerenv-126335
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:15:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:15:20 +0000   Sat, 18 Oct 2025 12:14:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:15:20 +0000   Sat, 18 Oct 2025 12:14:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:15:20 +0000   Sat, 18 Oct 2025 12:14:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:15:20 +0000   Sat, 18 Oct 2025 12:15:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    dockerenv-126335
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                5a4dc2d2-e092-40ea-950d-48c0753c395c
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-lfpv2                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11s
	  kube-system                 etcd-dockerenv-126335                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17s
	  kube-system                 kindnet-2fgrz                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12s
	  kube-system                 kube-apiserver-dockerenv-126335             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 kube-controller-manager-dockerenv-126335    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 kube-proxy-spvqq                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 kube-scheduler-dockerenv-126335             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11s   kube-proxy       
	  Normal   Starting                 17s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 17s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  17s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17s   kubelet          Node dockerenv-126335 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s   kubelet          Node dockerenv-126335 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s   kubelet          Node dockerenv-126335 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13s   node-controller  Node dockerenv-126335 event: Registered Node dockerenv-126335 in Controller
	  Normal   NodeReady                0s    kubelet          Node dockerenv-126335 status is now: NodeReady
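This whole section is effectively a node describe; to reproduce it against the same profile:

    kubectl --context dockerenv-126335 describe node dockerenv-126335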
	
	
	==> dmesg <==
	[Oct18 11:37] overlayfs: idmapped layers are currently not supported
	[Oct18 11:38] overlayfs: idmapped layers are currently not supported
	[Oct18 11:40] overlayfs: idmapped layers are currently not supported
	[Oct18 11:42] overlayfs: idmapped layers are currently not supported
	[Oct18 11:43] overlayfs: idmapped layers are currently not supported
	[ +44.292171] overlayfs: idmapped layers are currently not supported
	[  +9.552091] overlayfs: idmapped layers are currently not supported
	[Oct18 11:44] overlayfs: idmapped layers are currently not supported
	[Oct18 11:45] overlayfs: idmapped layers are currently not supported
	[Oct18 11:47] overlayfs: idmapped layers are currently not supported
	[ +55.826989] overlayfs: idmapped layers are currently not supported
	[Oct18 11:48] overlayfs: idmapped layers are currently not supported
	[Oct18 11:49] overlayfs: idmapped layers are currently not supported
	[Oct18 11:50] overlayfs: idmapped layers are currently not supported
	[Oct18 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.885672] overlayfs: idmapped layers are currently not supported
	[ +14.381354] overlayfs: idmapped layers are currently not supported
	[Oct18 11:52] overlayfs: idmapped layers are currently not supported
	[Oct18 11:53] overlayfs: idmapped layers are currently not supported
	[Oct18 11:54] overlayfs: idmapped layers are currently not supported
	[Oct18 11:55] overlayfs: idmapped layers are currently not supported
	[ +48.139503] overlayfs: idmapped layers are currently not supported
	[Oct18 11:56] overlayfs: idmapped layers are currently not supported
	[Oct18 11:58] overlayfs: idmapped layers are currently not supported
	[Oct18 12:00] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [58853751465a3efd712f43b2c9771ddf1a3712ec835251f8706bfe938a29238e] <==
	{"level":"warn","ts":"2025-10-18T12:14:59.452608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.463279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.486582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.504041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.532486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.553516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.573753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.592956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.607779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.623927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.639139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.665208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.677553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.707230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.716944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.728735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.764609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.776003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.791680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.809712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.825063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.857722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.874024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.890213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:14:59.996462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46900","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:15:21 up 13:57,  0 user,  load average: 1.12, 0.81, 1.68
	Linux dockerenv-126335 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [33347e8d4cab09ae0e67c5e936aa5deec0b4166dcac2712ebe7310e038d46e63] <==
	I1018 12:15:09.619735       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:15:09.708106       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1018 12:15:09.708429       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:15:09.708611       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:15:09.708638       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:15:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:15:09.911545       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:15:09.911743       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:15:09.911885       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:15:09.912738       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 12:15:10.112326       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:15:10.112542       1 metrics.go:72] Registering metrics
	I1018 12:15:10.112748       1 controller.go:711] "Syncing nftables rules"
	I1018 12:15:19.912287       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:15:19.912330       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2baf836b8658d262cbd7c5a711af0ea587f0f95d0ebb1bdec16cb83cb55db965] <==
	I1018 12:15:01.276459       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 12:15:01.276592       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1018 12:15:01.289152       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 12:15:01.289472       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:15:01.302255       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:15:01.302525       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:15:01.302570       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 12:15:01.452532       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 12:15:01.865190       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 12:15:01.870901       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 12:15:01.870931       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 12:15:02.616715       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:15:02.673775       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:15:02.770935       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 12:15:02.779562       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1018 12:15:02.781028       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:15:02.788829       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:15:02.953049       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:15:03.683272       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:15:03.700092       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 12:15:03.712778       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:15:08.352093       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:15:08.755579       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 12:15:08.823662       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:15:08.840748       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [b71a0287cfe99570ddb2ea8da9e987a2b7aeb49e4de5466117a8a66c8436b3db] <==
	I1018 12:15:07.958070       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:15:07.964658       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="dockerenv-126335" podCIDRs=["10.244.0.0/24"]
	I1018 12:15:07.984443       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 12:15:07.991872       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:15:07.994280       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 12:15:07.994454       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 12:15:07.994776       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 12:15:07.995563       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 12:15:07.995683       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:15:07.995778       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:15:07.995872       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:15:07.995723       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 12:15:07.996195       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="dockerenv-126335"
	I1018 12:15:07.996321       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 12:15:07.996476       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 12:15:07.996772       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 12:15:07.996922       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 12:15:07.997044       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 12:15:07.997169       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:15:07.997925       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 12:15:07.998056       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 12:15:07.999043       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 12:15:08.000420       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 12:15:08.001544       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 12:15:08.005218       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	
	
	==> kube-proxy [c76664a54c46bd57e0c4e759e5604586d555b72215171142bd3dc0567a280c00] <==
	I1018 12:15:09.403193       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:15:09.477433       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:15:09.577688       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:15:09.577727       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:15:09.577819       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:15:09.596774       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:15:09.596989       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:15:09.601283       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:15:09.601779       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:15:09.601805       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:15:09.606410       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:15:09.606587       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:15:09.607018       1 config.go:200] "Starting service config controller"
	I1018 12:15:09.607116       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:15:09.607800       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:15:09.609878       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:15:09.620593       1 config.go:309] "Starting node config controller"
	I1018 12:15:09.621309       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:15:09.621438       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:15:09.707328       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 12:15:09.708463       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:15:09.713894       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [617570fe118423fbb1bcba25a2f6941d620139e4dae4cc6cf618e6f6a11c3cb7] <==
	I1018 12:15:01.658158       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:15:01.658200       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:15:01.659165       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 12:15:01.664016       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:15:01.675342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:15:01.689147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:15:01.689380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:15:01.689510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:15:01.689684       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:15:01.689804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:15:01.689979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:15:01.690132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:15:01.690275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:15:01.690721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:15:01.696518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:15:01.696779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:15:01.696862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:15:01.696915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:15:01.696964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:15:01.701969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:15:01.702306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:15:01.702401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:15:01.702474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:15:02.652862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 12:15:05.258807       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 12:15:04 dockerenv-126335 kubelet[1460]: E1018 12:15:04.768759    1460 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-dockerenv-126335\" already exists" pod="kube-system/etcd-dockerenv-126335"
	Oct 18 12:15:04 dockerenv-126335 kubelet[1460]: E1018 12:15:04.771231    1460 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-dockerenv-126335\" already exists" pod="kube-system/kube-apiserver-dockerenv-126335"
	Oct 18 12:15:04 dockerenv-126335 kubelet[1460]: I1018 12:15:04.781376    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-dockerenv-126335" podStartSLOduration=1.7812330570000001 podStartE2EDuration="1.781233057s" podCreationTimestamp="2025-10-18 12:15:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:15:04.781151164 +0000 UTC m=+1.260985241" watchObservedRunningTime="2025-10-18 12:15:04.781233057 +0000 UTC m=+1.261067118"
	Oct 18 12:15:04 dockerenv-126335 kubelet[1460]: I1018 12:15:04.832623    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-dockerenv-126335" podStartSLOduration=0.83260441 podStartE2EDuration="832.60441ms" podCreationTimestamp="2025-10-18 12:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:15:04.808143387 +0000 UTC m=+1.287977456" watchObservedRunningTime="2025-10-18 12:15:04.83260441 +0000 UTC m=+1.312438471"
	Oct 18 12:15:04 dockerenv-126335 kubelet[1460]: I1018 12:15:04.855638    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-dockerenv-126335" podStartSLOduration=1.855621846 podStartE2EDuration="1.855621846s" podCreationTimestamp="2025-10-18 12:15:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:15:04.855228642 +0000 UTC m=+1.335062735" watchObservedRunningTime="2025-10-18 12:15:04.855621846 +0000 UTC m=+1.335455907"
	Oct 18 12:15:04 dockerenv-126335 kubelet[1460]: I1018 12:15:04.855745    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-dockerenv-126335" podStartSLOduration=0.855738257 podStartE2EDuration="855.738257ms" podCreationTimestamp="2025-10-18 12:15:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:15:04.832990632 +0000 UTC m=+1.312824717" watchObservedRunningTime="2025-10-18 12:15:04.855738257 +0000 UTC m=+1.335572342"
	Oct 18 12:15:08 dockerenv-126335 kubelet[1460]: I1018 12:15:08.044450    1460 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 12:15:08 dockerenv-126335 kubelet[1460]: I1018 12:15:08.045785    1460 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 12:15:08 dockerenv-126335 kubelet[1460]: I1018 12:15:08.959368    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b41f931c-6012-4e51-b7db-d335e572f24f-kube-proxy\") pod \"kube-proxy-spvqq\" (UID: \"b41f931c-6012-4e51-b7db-d335e572f24f\") " pod="kube-system/kube-proxy-spvqq"
	Oct 18 12:15:08 dockerenv-126335 kubelet[1460]: I1018 12:15:08.959424    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b41f931c-6012-4e51-b7db-d335e572f24f-xtables-lock\") pod \"kube-proxy-spvqq\" (UID: \"b41f931c-6012-4e51-b7db-d335e572f24f\") " pod="kube-system/kube-proxy-spvqq"
	Oct 18 12:15:08 dockerenv-126335 kubelet[1460]: I1018 12:15:08.959443    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b41f931c-6012-4e51-b7db-d335e572f24f-lib-modules\") pod \"kube-proxy-spvqq\" (UID: \"b41f931c-6012-4e51-b7db-d335e572f24f\") " pod="kube-system/kube-proxy-spvqq"
	Oct 18 12:15:08 dockerenv-126335 kubelet[1460]: I1018 12:15:08.959463    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e65a5297-a110-4217-9feb-53737a0ea6e0-cni-cfg\") pod \"kindnet-2fgrz\" (UID: \"e65a5297-a110-4217-9feb-53737a0ea6e0\") " pod="kube-system/kindnet-2fgrz"
	Oct 18 12:15:08 dockerenv-126335 kubelet[1460]: I1018 12:15:08.959483    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e65a5297-a110-4217-9feb-53737a0ea6e0-lib-modules\") pod \"kindnet-2fgrz\" (UID: \"e65a5297-a110-4217-9feb-53737a0ea6e0\") " pod="kube-system/kindnet-2fgrz"
	Oct 18 12:15:08 dockerenv-126335 kubelet[1460]: I1018 12:15:08.959502    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq49c\" (UniqueName: \"kubernetes.io/projected/b41f931c-6012-4e51-b7db-d335e572f24f-kube-api-access-wq49c\") pod \"kube-proxy-spvqq\" (UID: \"b41f931c-6012-4e51-b7db-d335e572f24f\") " pod="kube-system/kube-proxy-spvqq"
	Oct 18 12:15:08 dockerenv-126335 kubelet[1460]: I1018 12:15:08.959521    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e65a5297-a110-4217-9feb-53737a0ea6e0-xtables-lock\") pod \"kindnet-2fgrz\" (UID: \"e65a5297-a110-4217-9feb-53737a0ea6e0\") " pod="kube-system/kindnet-2fgrz"
	Oct 18 12:15:08 dockerenv-126335 kubelet[1460]: I1018 12:15:08.959537    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhb8t\" (UniqueName: \"kubernetes.io/projected/e65a5297-a110-4217-9feb-53737a0ea6e0-kube-api-access-qhb8t\") pod \"kindnet-2fgrz\" (UID: \"e65a5297-a110-4217-9feb-53737a0ea6e0\") " pod="kube-system/kindnet-2fgrz"
	Oct 18 12:15:09 dockerenv-126335 kubelet[1460]: I1018 12:15:09.076141    1460 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 12:15:09 dockerenv-126335 kubelet[1460]: I1018 12:15:09.792392    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2fgrz" podStartSLOduration=1.792371335 podStartE2EDuration="1.792371335s" podCreationTimestamp="2025-10-18 12:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:15:09.776359951 +0000 UTC m=+6.256194020" watchObservedRunningTime="2025-10-18 12:15:09.792371335 +0000 UTC m=+6.272205396"
	Oct 18 12:15:10 dockerenv-126335 kubelet[1460]: I1018 12:15:10.980078    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-spvqq" podStartSLOduration=2.9800580979999998 podStartE2EDuration="2.980058098s" podCreationTimestamp="2025-10-18 12:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:15:09.793549511 +0000 UTC m=+6.273383596" watchObservedRunningTime="2025-10-18 12:15:10.980058098 +0000 UTC m=+7.459892167"
	Oct 18 12:15:20 dockerenv-126335 kubelet[1460]: I1018 12:15:20.006103    1460 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 12:15:20 dockerenv-126335 kubelet[1460]: I1018 12:15:20.243887    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46wxz\" (UniqueName: \"kubernetes.io/projected/72855eaa-af4f-4fc1-84ea-42839eab88d6-kube-api-access-46wxz\") pod \"storage-provisioner\" (UID: \"72855eaa-af4f-4fc1-84ea-42839eab88d6\") " pod="kube-system/storage-provisioner"
	Oct 18 12:15:20 dockerenv-126335 kubelet[1460]: I1018 12:15:20.243961    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnbrj\" (UniqueName: \"kubernetes.io/projected/bb1a4173-f449-42df-bac5-545b292e0d0a-kube-api-access-nnbrj\") pod \"coredns-66bc5c9577-lfpv2\" (UID: \"bb1a4173-f449-42df-bac5-545b292e0d0a\") " pod="kube-system/coredns-66bc5c9577-lfpv2"
	Oct 18 12:15:20 dockerenv-126335 kubelet[1460]: I1018 12:15:20.244012    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb1a4173-f449-42df-bac5-545b292e0d0a-config-volume\") pod \"coredns-66bc5c9577-lfpv2\" (UID: \"bb1a4173-f449-42df-bac5-545b292e0d0a\") " pod="kube-system/coredns-66bc5c9577-lfpv2"
	Oct 18 12:15:20 dockerenv-126335 kubelet[1460]: I1018 12:15:20.244061    1460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/72855eaa-af4f-4fc1-84ea-42839eab88d6-tmp\") pod \"storage-provisioner\" (UID: \"72855eaa-af4f-4fc1-84ea-42839eab88d6\") " pod="kube-system/storage-provisioner"
	Oct 18 12:15:20 dockerenv-126335 kubelet[1460]: I1018 12:15:20.875085    1460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.8750645 podStartE2EDuration="15.8750645s" podCreationTimestamp="2025-10-18 12:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 12:15:20.824007946 +0000 UTC m=+17.303842031" watchObservedRunningTime="2025-10-18 12:15:20.8750645 +0000 UTC m=+17.354898569"
	
	
	==> storage-provisioner [2eb608a42ce17ae7b66b0d4860c4161e38a9982063e18f7d1f4eea765cebd7da] <==
	I1018 12:15:20.710070       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 12:15:20.741956       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 12:15:20.742280       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 12:15:20.756342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:15:20.768591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:15:20.832304       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 12:15:20.833078       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32398ce1-1506-4801-8501-d0a276604e88", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' dockerenv-126335_f2b010bf-2cc8-414d-93a2-e55f1fb7e582 became leader
	I1018 12:15:20.833347       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_dockerenv-126335_f2b010bf-2cc8-414d-93a2-e55f1fb7e582!
	W1018 12:15:20.882582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:15:20.888886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 12:15:20.937389       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_dockerenv-126335_f2b010bf-2cc8-414d-93a2-e55f1fb7e582!
	

-- /stdout --
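The repeated "v1 Endpoints is deprecated" warnings in the storage-provisioner log above come from the provisioner taking its leader-election lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath). For reference, a minimal sketch of the Lease-based lock that current client-go recommends instead; the package name, function signature, and timing constants are illustrative assumptions, not the provisioner's actual wiring:

package provisioner

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLeaseLock competes for the same lease name seen in the log above,
// but stores it on a coordination.k8s.io/v1 Lease instead of a deprecated
// v1 Endpoints object, so the API server emits no deprecation warnings.
func runWithLeaseLock(ctx context.Context, client kubernetes.Interface, id string) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // lease validity without renewal
		RenewDeadline: 10 * time.Second, // leader must renew within this window
		RetryPeriod:   2 * time.Second,  // candidate retry interval
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// became leader: start the provisioner controller here
			},
			OnStoppedLeading: func() {
				// lost the lease: stop provisioning work
			},
		},
	})
}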
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p dockerenv-126335 -n dockerenv-126335
helpers_test.go:269: (dbg) Run:  kubectl --context dockerenv-126335 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestDockerEnvContainerd FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "dockerenv-126335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-126335
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-126335: (2.386028973s)
--- FAIL: TestDockerEnvContainerd (48.39s)

TestFunctional/parallel/DashboardCmd (302.63s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-955523 --alsologtostderr -v=1]
E1018 12:33:46.997374 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-955523 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-955523 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-955523 --alsologtostderr -v=1] stderr:
I1018 12:29:00.571129 2121168 out.go:360] Setting OutFile to fd 1 ...
I1018 12:29:00.573530 2121168 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:29:00.573571 2121168 out.go:374] Setting ErrFile to fd 2...
I1018 12:29:00.573587 2121168 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:29:00.574036 2121168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
I1018 12:29:00.574527 2121168 mustload.go:65] Loading cluster: functional-955523
I1018 12:29:00.575242 2121168 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1018 12:29:00.575965 2121168 cli_runner.go:164] Run: docker container inspect functional-955523 --format={{.State.Status}}
I1018 12:29:00.602785 2121168 host.go:66] Checking if "functional-955523" exists ...
I1018 12:29:00.603824 2121168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1018 12:29:00.665541 2121168 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 12:29:00.656577733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1018 12:29:00.665651 2121168 api_server.go:166] Checking apiserver status ...
I1018 12:29:00.665725 2121168 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1018 12:29:00.665769 2121168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
I1018 12:29:00.683185 2121168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35709 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/functional-955523/id_rsa Username:docker}
I1018 12:29:00.792475 2121168 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4785/cgroup
I1018 12:29:00.800565 2121168 api_server.go:182] apiserver freezer: "9:freezer:/docker/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c/kubepods/burstable/poda1a0e5d241771bc7495ed9e034cb022e/5b3b426b0241c3bc68a439120feb2f099fa5671ef78cf372487f7863c3e46bb6"
I1018 12:29:00.800654 2121168 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c/kubepods/burstable/poda1a0e5d241771bc7495ed9e034cb022e/5b3b426b0241c3bc68a439120feb2f099fa5671ef78cf372487f7863c3e46bb6/freezer.state
I1018 12:29:00.808208 2121168 api_server.go:204] freezer state: "THAWED"
I1018 12:29:00.808238 2121168 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I1018 12:29:00.816664 2121168 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W1018 12:29:00.816724 2121168 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1018 12:29:00.816934 2121168 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1018 12:29:00.816969 2121168 addons.go:69] Setting dashboard=true in profile "functional-955523"
I1018 12:29:00.816987 2121168 addons.go:238] Setting addon dashboard=true in "functional-955523"
I1018 12:29:00.817016 2121168 host.go:66] Checking if "functional-955523" exists ...
I1018 12:29:00.817495 2121168 cli_runner.go:164] Run: docker container inspect functional-955523 --format={{.State.Status}}
I1018 12:29:00.837762 2121168 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1018 12:29:00.840574 2121168 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1018 12:29:00.843346 2121168 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1018 12:29:00.843367 2121168 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1018 12:29:00.843459 2121168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
I1018 12:29:00.860458 2121168 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35709 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/functional-955523/id_rsa Username:docker}
I1018 12:29:00.968858 2121168 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1018 12:29:00.968891 2121168 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1018 12:29:00.981714 2121168 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1018 12:29:00.981736 2121168 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1018 12:29:00.995300 2121168 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1018 12:29:00.995326 2121168 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1018 12:29:01.009952 2121168 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1018 12:29:01.009976 2121168 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1018 12:29:01.026952 2121168 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I1018 12:29:01.026978 2121168 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1018 12:29:01.040459 2121168 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1018 12:29:01.040482 2121168 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1018 12:29:01.053319 2121168 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1018 12:29:01.053347 2121168 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1018 12:29:01.066654 2121168 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1018 12:29:01.066680 2121168 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1018 12:29:01.080302 2121168 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1018 12:29:01.080352 2121168 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1018 12:29:01.094714 2121168 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1018 12:29:01.925218 2121168 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-955523 addons enable metrics-server

I1018 12:29:01.928112 2121168 addons.go:201] Writing out "functional-955523" config to set dashboard=true...
W1018 12:29:01.928437 2121168 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1018 12:29:01.929113 2121168 kapi.go:59] client config for functional-955523: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt", KeyFile:"/home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.key", CAFile:"/home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1018 12:29:01.929680 2121168 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1018 12:29:01.929716 2121168 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1018 12:29:01.929734 2121168 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1018 12:29:01.929749 2121168 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1018 12:29:01.929764 2121168 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1018 12:29:01.946883 2121168 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  af279e55-053d-4260-90c3-c82f9c322b27 1413 0 2025-10-18 12:29:01 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-10-18 12:29:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.96.82.92,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.96.82.92],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1018 12:29:01.947086 2121168 out.go:285] * Launching proxy ...
* Launching proxy ...
I1018 12:29:01.947211 2121168 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-955523 proxy --port 36195]
I1018 12:29:01.947499 2121168 dashboard.go:157] Waiting for kubectl to output host:port ...
I1018 12:29:02.022827 2121168 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W1018 12:29:02.022876 2121168 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1018 12:29:02.058326 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[149950b7-cf05-4b64-80c6-1dc6b33cb935] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40015b8100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000392c80 TLS:<nil>}
I1018 12:29:02.058420 2121168 retry.go:31] will retry after 147.73µs: Temporary Error: unexpected response code: 503
I1018 12:29:02.066869 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[38ae9127-ca72-4513-8197-ed1de14b0175] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40007b31c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2c80 TLS:<nil>}
I1018 12:29:02.066960 2121168 retry.go:31] will retry after 165.203µs: Temporary Error: unexpected response code: 503
I1018 12:29:02.081597 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[fbc545dc-9416-4234-a37e-8948157d890a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40007b3280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2dc0 TLS:<nil>}
I1018 12:29:02.081675 2121168 retry.go:31] will retry after 134.154µs: Temporary Error: unexpected response code: 503
I1018 12:29:02.088757 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e63e93b6-a40b-48fc-bac5-1bc486890e82] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40007b3380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40001b2f00 TLS:<nil>}
I1018 12:29:02.088827 2121168 retry.go:31] will retry after 188.298µs: Temporary Error: unexpected response code: 503
I1018 12:29:02.102931 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[63b609a7-4179-413f-87e9-bea89087807f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40007b3400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043e140 TLS:<nil>}
I1018 12:29:02.103000 2121168 retry.go:31] will retry after 630.728µs: Temporary Error: unexpected response code: 503
I1018 12:29:02.107930 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3296d977-06b3-4726-82d9-40b63122d6f8] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40007b3500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043e280 TLS:<nil>}
I1018 12:29:02.108017 2121168 retry.go:31] will retry after 874.779µs: Temporary Error: unexpected response code: 503
I1018 12:29:02.114232 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7515ede2-8037-457d-96c4-a83fc9a17e20] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40015b8400 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000392dc0 TLS:<nil>}
I1018 12:29:02.114300 2121168 retry.go:31] will retry after 960.833µs: Temporary Error: unexpected response code: 503
I1018 12:29:02.118734 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ecfe18e1-108e-421b-b01d-98c0602e7cdf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40015b8480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000392f00 TLS:<nil>}
I1018 12:29:02.118799 2121168 retry.go:31] will retry after 1.920942ms: Temporary Error: unexpected response code: 503
I1018 12:29:02.125292 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ca929e8a-1ee6-442d-b891-fbfee5fb8c9d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40015b8500 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000393040 TLS:<nil>}
I1018 12:29:02.125367 2121168 retry.go:31] will retry after 2.836593ms: Temporary Error: unexpected response code: 503
I1018 12:29:02.131592 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3fa1e515-722c-45ef-baaf-a66c4e3ec651] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40015b8580 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000393180 TLS:<nil>}
I1018 12:29:02.131654 2121168 retry.go:31] will retry after 4.335706ms: Temporary Error: unexpected response code: 503
I1018 12:29:02.139806 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4b0e9689-b9a4-4263-89ad-d18d37ec9679] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40007b3940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043e3c0 TLS:<nil>}
I1018 12:29:02.139930 2121168 retry.go:31] will retry after 7.259089ms: Temporary Error: unexpected response code: 503
I1018 12:29:02.150974 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7e3da707-c325-4e3d-8b0a-badce3febc8d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40015b8680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003932c0 TLS:<nil>}
I1018 12:29:02.151061 2121168 retry.go:31] will retry after 11.751041ms: Temporary Error: unexpected response code: 503
I1018 12:29:02.166127 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[18126ecf-6894-41d7-85c4-24b41d6353fd] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40007b3bc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000393400 TLS:<nil>}
I1018 12:29:02.166189 2121168 retry.go:31] will retry after 13.021751ms: Temporary Error: unexpected response code: 503
I1018 12:29:02.182505 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4ff7bee2-0a97-4878-9680-047f90f16d64] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40015b8740 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000393540 TLS:<nil>}
I1018 12:29:02.182574 2121168 retry.go:31] will retry after 11.835149ms: Temporary Error: unexpected response code: 503
I1018 12:29:02.197645 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[75cd282a-a054-4d61-8714-19c4fdf91bdb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40015b8840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043e500 TLS:<nil>}
I1018 12:29:02.197711 2121168 retry.go:31] will retry after 37.89078ms: Temporary Error: unexpected response code: 503
I1018 12:29:02.238961 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a2f03607-a266-475b-b1ed-8e9aa0762d40] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40007b3d80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043e640 TLS:<nil>}
I1018 12:29:02.239033 2121168 retry.go:31] will retry after 38.086184ms: Temporary Error: unexpected response code: 503
I1018 12:29:02.280316 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[988f7b3a-fdc9-4586-8850-0998e08ce99d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40015b8940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000393680 TLS:<nil>}
I1018 12:29:02.280386 2121168 retry.go:31] will retry after 74.822149ms: Temporary Error: unexpected response code: 503
I1018 12:29:02.358722 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2bf25d8a-5467-487c-9100-204638471e0d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40007b3ec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043e780 TLS:<nil>}
I1018 12:29:02.358803 2121168 retry.go:31] will retry after 100.973243ms: Temporary Error: unexpected response code: 503
I1018 12:29:02.462920 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5777240b-2b93-452a-b9d6-e84e41de1a08] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x4001676000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40003937c0 TLS:<nil>}
I1018 12:29:02.462991 2121168 retry.go:31] will retry after 185.752594ms: Temporary Error: unexpected response code: 503
I1018 12:29:02.652459 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d370807c-e5c1-4493-896e-128bf7edc533] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40015b8b00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000393a40 TLS:<nil>}
I1018 12:29:02.652572 2121168 retry.go:31] will retry after 310.8828ms: Temporary Error: unexpected response code: 503
I1018 12:29:02.968912 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0e0c916a-aff6-4928-9ab5-cf260be6b014] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:02 GMT]] Body:0x40015b8b80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000393b80 TLS:<nil>}
I1018 12:29:02.969008 2121168 retry.go:31] will retry after 492.424826ms: Temporary Error: unexpected response code: 503
I1018 12:29:03.464742 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[21daf083-3a7f-45b0-a487-3012dafe653a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:03 GMT]] Body:0x4001676140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043e8c0 TLS:<nil>}
I1018 12:29:03.464808 2121168 retry.go:31] will retry after 640.013316ms: Temporary Error: unexpected response code: 503
I1018 12:29:04.109318 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1f9a350e-a42a-4f9f-bdea-b4ae57508fe3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:04 GMT]] Body:0x4001676240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000393cc0 TLS:<nil>}
I1018 12:29:04.109389 2121168 retry.go:31] will retry after 1.097195613s: Temporary Error: unexpected response code: 503
I1018 12:29:05.209819 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[35092fd3-a1db-49bf-bdde-53b909189105] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:05 GMT]] Body:0x40016762c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043ea00 TLS:<nil>}
I1018 12:29:05.209906 2121168 retry.go:31] will retry after 783.586447ms: Temporary Error: unexpected response code: 503
I1018 12:29:05.996752 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[228ac21d-0876-42ad-9ba2-a9d9bf992ea7] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:05 GMT]] Body:0x40015b8d00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000393e00 TLS:<nil>}
I1018 12:29:05.996813 2121168 retry.go:31] will retry after 1.876372407s: Temporary Error: unexpected response code: 503
I1018 12:29:07.877489 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[100e8334-3199-4fff-b139-17702376c698] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:07 GMT]] Body:0x40016763c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043eb40 TLS:<nil>}
I1018 12:29:07.877552 2121168 retry.go:31] will retry after 2.913886532s: Temporary Error: unexpected response code: 503
I1018 12:29:10.795052 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8866cc60-b3c4-4275-8094-ffdd7546cfc6] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:10 GMT]] Body:0x40015b8e00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000472000 TLS:<nil>}
I1018 12:29:10.795115 2121168 retry.go:31] will retry after 4.502747372s: Temporary Error: unexpected response code: 503
I1018 12:29:15.303756 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bc2f6e0c-2313-4363-8b35-5aa1ade4ec4e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:15 GMT]] Body:0x40016764c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043ec80 TLS:<nil>}
I1018 12:29:15.303863 2121168 retry.go:31] will retry after 7.157472925s: Temporary Error: unexpected response code: 503
I1018 12:29:22.464441 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5375f87b-b3ba-49bd-b626-b60db7418a24] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:22 GMT]] Body:0x4001676540 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043edc0 TLS:<nil>}
I1018 12:29:22.464502 2121168 retry.go:31] will retry after 4.459107704s: Temporary Error: unexpected response code: 503
I1018 12:29:26.926525 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ab801d7c-fccd-4c59-865e-0d6c6add5efa] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:26 GMT]] Body:0x40015b8fc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000472140 TLS:<nil>}
I1018 12:29:26.926584 2121168 retry.go:31] will retry after 9.751224858s: Temporary Error: unexpected response code: 503
I1018 12:29:36.681368 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bf7a3b51-d788-4100-81f5-29805d9dfb88] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:36 GMT]] Body:0x4001676640 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000472280 TLS:<nil>}
I1018 12:29:36.681429 2121168 retry.go:31] will retry after 14.18088601s: Temporary Error: unexpected response code: 503
I1018 12:29:50.866306 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b986e588-0c14-42d9-b183-b174c4f5e249] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:29:50 GMT]] Body:0x4001676700 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400043ef00 TLS:<nil>}
I1018 12:29:50.866366 2121168 retry.go:31] will retry after 40.874719426s: Temporary Error: unexpected response code: 503
I1018 12:30:31.744351 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6d36bd13-eeec-491a-b24f-cfa377653c2d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:30:31 GMT]] Body:0x40016767c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000472640 TLS:<nil>}
I1018 12:30:31.744409 2121168 retry.go:31] will retry after 24.091747799s: Temporary Error: unexpected response code: 503
I1018 12:30:55.839516 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a7ac927e-e6d9-477b-82c1-1117a10dd136] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:30:55 GMT]] Body:0x4001676880 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000472780 TLS:<nil>}
I1018 12:30:55.839584 2121168 retry.go:31] will retry after 55.066861593s: Temporary Error: unexpected response code: 503
I1018 12:31:50.909469 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8896ed80-2435-4931-9dfd-3dd9f24c3c81] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:31:50 GMT]] Body:0x4001676100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004728c0 TLS:<nil>}
I1018 12:31:50.909537 2121168 retry.go:31] will retry after 42.582064822s: Temporary Error: unexpected response code: 503
I1018 12:32:33.494599 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cd7ec95a-6b10-4117-a419-bdca8256d8c9] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:32:33 GMT]] Body:0x4001676200 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000472a00 TLS:<nil>}
I1018 12:32:33.494663 2121168 retry.go:31] will retry after 1m20.06447582s: Temporary Error: unexpected response code: 503
I1018 12:33:53.563387 2121168 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a78a69c0-25ec-4999-948b-b569f44846f6] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 18 Oct 2025 12:33:53 GMT]] Body:0x40015b80c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000472dc0 TLS:<nil>}
I1018 12:33:53.563455 2121168 retry.go:31] will retry after 1m5.084695621s: Temporary Error: unexpected response code: 503
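The retry cadence in the trace above (first waits of a few hundred microseconds, growing to over a minute) is a capped, jittered exponential backoff. A minimal sketch of the same polling pattern built on k8s.io/apimachinery/pkg/util/wait; the package name, function name, and backoff constants are illustrative assumptions rather than minikube's actual retry.go:

package dashboard

import (
	"fmt"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// pollProxy probes the kubectl-proxy URL until it answers 200 OK or the
// backoff budget runs out, stretching the wait between attempts each round.
func pollProxy(url string) error {
	backoff := wait.Backoff{
		Duration: 200 * time.Microsecond, // initial wait, as in the log's first retries
		Factor:   1.6,                    // multiply the wait each step
		Jitter:   0.5,                    // randomize to spread out retries
		Steps:    40,                     // attempt budget before giving up
		Cap:      2 * time.Minute,        // never wait longer than this per step
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		resp, err := http.Get(url)
		if err != nil {
			return false, nil // connection errors are transient: retry
		}
		defer resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return true, nil // dashboard is serving: stop polling
		}
		fmt.Printf("unexpected response code: %d, will retry\n", resp.StatusCode)
		return false, nil
	})
}

Once the attempt budget is exhausted the call returns an error instead of ever reporting a healthy endpoint, which is the shape of the failure functional_test.go:933 reported above ("output didn't produce a URL").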
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-955523
helpers_test.go:243: (dbg) docker inspect functional-955523:

-- stdout --
	[
	    {
	        "Id": "e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c",
	        "Created": "2025-10-18T12:16:14.008246334Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2107283,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:16:14.069510242Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c/hosts",
	        "LogPath": "/var/lib/docker/containers/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c-json.log",
	        "Name": "/functional-955523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-955523:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-955523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c",
	                "LowerDir": "/var/lib/docker/overlay2/6e9eaae7536306379e0cae21c86cbbf1542a5c605653dc40dc5bea4a058bdfb8-init/diff:/var/lib/docker/overlay2/647b2423f8222638985dff90791465004ec84c7fd61ca3830bba92bce09f80ef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e9eaae7536306379e0cae21c86cbbf1542a5c605653dc40dc5bea4a058bdfb8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e9eaae7536306379e0cae21c86cbbf1542a5c605653dc40dc5bea4a058bdfb8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e9eaae7536306379e0cae21c86cbbf1542a5c605653dc40dc5bea4a058bdfb8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-955523",
	                "Source": "/var/lib/docker/volumes/functional-955523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-955523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-955523",
	                "name.minikube.sigs.k8s.io": "functional-955523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "adfe4b4bccae20f103c88e75ee04efa9395565011b987ceb79a51e3a57d55dca",
	            "SandboxKey": "/var/run/docker/netns/adfe4b4bccae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35709"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35710"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35713"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35711"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35712"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-955523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:d4:c2:3f:ec:f9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "42cfb9c176848d2ffeccdf17874138cf42d5bcd8128808bcdc9dac0a8534a110",
	                    "EndpointID": "c0b036060bcdeb0e9c3fb4f11cc997807f67c06fafda01adb14cd2a83f1d025d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-955523",
	                        "e31280ad3c62"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
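
The dump above is the full `docker inspect` JSON. When a post-mortem needs only one field, `docker inspect` accepts a Go template via `--format`/`-f`; against the container above, these two commands should print "running" and "192.168.49.2" respectively:

    docker inspect -f '{{ .State.Status }}' functional-955523
    docker inspect -f '{{ (index .NetworkSettings.Networks "functional-955523").IPAddress }}' functional-955523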
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-955523 -n functional-955523
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-955523 logs -n 25: (1.522676925s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-955523 /tmp/TestFunctionalparallelMountCmdany-port749775523/001:/mount-9p --alsologtostderr -v=1                   │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ ssh       │ functional-955523 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ ssh       │ functional-955523 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ ssh       │ functional-955523 ssh -- ls -la /mount-9p                                                                                        │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ ssh       │ functional-955523 ssh cat /mount-9p/test-1760790528268537598                                                                     │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ ssh       │ functional-955523 ssh stat /mount-9p/created-by-test                                                                             │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ ssh       │ functional-955523 ssh stat /mount-9p/created-by-pod                                                                              │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ ssh       │ functional-955523 ssh sudo umount -f /mount-9p                                                                                   │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ mount     │ -p functional-955523 /tmp/TestFunctionalparallelMountCmdspecific-port817102592/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ ssh       │ functional-955523 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ ssh       │ functional-955523 ssh findmnt -T /mount-9p | grep 9p                                                                             │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ ssh       │ functional-955523 ssh -- ls -la /mount-9p                                                                                        │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ ssh       │ functional-955523 ssh sudo umount -f /mount-9p                                                                                   │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ mount     │ -p functional-955523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1692705097/001:/mount1 --alsologtostderr -v=1               │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ mount     │ -p functional-955523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1692705097/001:/mount2 --alsologtostderr -v=1               │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ mount     │ -p functional-955523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1692705097/001:/mount3 --alsologtostderr -v=1               │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ ssh       │ functional-955523 ssh findmnt -T /mount1                                                                                         │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ ssh       │ functional-955523 ssh findmnt -T /mount1                                                                                         │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ ssh       │ functional-955523 ssh findmnt -T /mount2                                                                                         │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ ssh       │ functional-955523 ssh findmnt -T /mount3                                                                                         │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ mount     │ -p functional-955523 --kill=true                                                                                                 │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ start     │ -p functional-955523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                  │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ start     │ -p functional-955523 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                            │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ start     │ -p functional-955523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd                  │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:29 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-955523 --alsologtostderr -v=1                                                                   │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:29 UTC │                     │
	└───────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:29:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:29:00.371311 2121122 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:29:00.371573 2121122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:29:00.371611 2121122 out.go:374] Setting ErrFile to fd 2...
	I1018 12:29:00.371634 2121122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:29:00.373612 2121122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
	I1018 12:29:00.374353 2121122 out.go:368] Setting JSON to false
	I1018 12:29:00.375500 2121122 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":51088,"bootTime":1760739453,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 12:29:00.375628 2121122 start.go:141] virtualization:  
	I1018 12:29:00.380667 2121122 out.go:179] * [functional-955523] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:29:00.387237 2121122 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:29:00.387251 2121122 notify.go:220] Checking for updates...
	I1018 12:29:00.394014 2121122 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:29:00.396938 2121122 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	I1018 12:29:00.400032 2121122 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	I1018 12:29:00.403051 2121122 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:29:00.406164 2121122 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:29:00.409911 2121122 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:29:00.410613 2121122 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:29:00.440054 2121122 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:29:00.440215 2121122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:29:00.503497 2121122 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 12:29:00.49269586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:29:00.503625 2121122 docker.go:318] overlay module found
	I1018 12:29:00.506801 2121122 out.go:179] * Using the docker driver based on existing profile
	I1018 12:29:00.509581 2121122 start.go:305] selected driver: docker
	I1018 12:29:00.509606 2121122 start.go:925] validating driver "docker" against &{Name:functional-955523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-955523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:29:00.509727 2121122 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:29:00.513382 2121122 out.go:203] 
	W1018 12:29:00.516345 2121122 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 12:29:00.519132 2121122 out.go:203] 
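	
	The warning above is minikube rejecting the `--dry-run --memory 250MB` request during driver validation: 250MiB is below the 1800MB usable minimum. Below is a minimal Go sketch of that guard; the function and constant names are hypothetical, and only the 1800MB floor and the RSRC_INSUFFICIENT_REQ_MEMORY reason code come from the log.
	
	    // Hypothetical sketch of the memory floor check behind the
	    // RSRC_INSUFFICIENT_REQ_MEMORY exit above; names are illustrative.
	    package main
	
	    import "fmt"
	
	    const minUsableMemoryMB = 1800 // usable minimum reported in the log
	
	    func validateRequestedMemory(requestedMB int) error {
	    	if requestedMB < minUsableMemoryMB {
	    		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB", requestedMB, minUsableMemoryMB)
	    	}
	    	return nil
	    }
	
	    func main() {
	    	// 250 mirrors the --memory 250MB flag recorded in the audit table.
	    	if err := validateRequestedMemory(250); err != nil {
	    		fmt.Println("X Exiting due to", err)
	    	}
	    }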
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	20f85e75cdfd2       1611cd07b61d5       5 minutes ago       Exited              mount-munger              0                   14f4f12d6989b       busybox-mount                               default
	e7653bc9c7bce       ba04bb24b9575       15 minutes ago      Running             storage-provisioner       2                   8fd44ed1df230       storage-provisioner                         kube-system
	5b3b426b0241c       43911e833d64d       15 minutes ago      Running             kube-apiserver            0                   94a8c004f82cd       kube-apiserver-functional-955523            kube-system
	a0c96f46d08ea       7eb2c6ff0c5a7       15 minutes ago      Running             kube-controller-manager   2                   0a85e91787857       kube-controller-manager-functional-955523   kube-system
	3d6e189351ed9       a1894772a478e       15 minutes ago      Running             etcd                      1                   831fb163c52b3       etcd-functional-955523                      kube-system
	fe29506973495       ba04bb24b9575       16 minutes ago      Exited              storage-provisioner       1                   8fd44ed1df230       storage-provisioner                         kube-system
	9768e1a243da3       b1a8c6f707935       16 minutes ago      Running             kindnet-cni               1                   39f7db02a4760       kindnet-g62kl                               kube-system
	1fee019c9744e       05baa95f5142d       16 minutes ago      Running             kube-proxy                1                   76b1c91847749       kube-proxy-wp97m                            kube-system
	49bcc42178cf4       7eb2c6ff0c5a7       16 minutes ago      Exited              kube-controller-manager   1                   0a85e91787857       kube-controller-manager-functional-955523   kube-system
	42cfd21f04099       b5f57ec6b9867       16 minutes ago      Running             kube-scheduler            1                   5a40d8c62b89c       kube-scheduler-functional-955523            kube-system
	482c932303b97       138784d87c9c5       16 minutes ago      Running             coredns                   1                   838d864c0cd20       coredns-66bc5c9577-jfd97                    kube-system
	176afc34450ef       138784d87c9c5       16 minutes ago      Exited              coredns                   0                   838d864c0cd20       coredns-66bc5c9577-jfd97                    kube-system
	65c34c830786f       05baa95f5142d       17 minutes ago      Exited              kube-proxy                0                   76b1c91847749       kube-proxy-wp97m                            kube-system
	d6f640024b52e       b1a8c6f707935       17 minutes ago      Exited              kindnet-cni               0                   39f7db02a4760       kindnet-g62kl                               kube-system
	ffdc1092e749b       b5f57ec6b9867       17 minutes ago      Exited              kube-scheduler            0                   5a40d8c62b89c       kube-scheduler-functional-955523            kube-system
	091437cb53c82       a1894772a478e       17 minutes ago      Exited              etcd                      0                   831fb163c52b3       etcd-functional-955523                      kube-system
	
	
	==> containerd <==
	Oct 18 12:30:24 functional-955523 containerd[3606]: time="2025-10-18T12:30:24.689847600Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 18 12:30:24 functional-955523 containerd[3606]: time="2025-10-18T12:30:24.692183831Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:30:24 functional-955523 containerd[3606]: time="2025-10-18T12:30:24.814840599Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:30:25 functional-955523 containerd[3606]: time="2025-10-18T12:30:25.121481899Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:30:25 functional-955523 containerd[3606]: time="2025-10-18T12:30:25.121847985Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Oct 18 12:30:32 functional-955523 containerd[3606]: time="2025-10-18T12:30:32.690301765Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 18 12:30:32 functional-955523 containerd[3606]: time="2025-10-18T12:30:32.692680482Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:30:32 functional-955523 containerd[3606]: time="2025-10-18T12:30:32.832547627Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:30:33 functional-955523 containerd[3606]: time="2025-10-18T12:30:33.111743430Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:30:33 functional-955523 containerd[3606]: time="2025-10-18T12:30:33.111869507Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Oct 18 12:31:39 functional-955523 containerd[3606]: time="2025-10-18T12:31:39.689411687Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 18 12:31:39 functional-955523 containerd[3606]: time="2025-10-18T12:31:39.691944129Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:31:39 functional-955523 containerd[3606]: time="2025-10-18T12:31:39.824439468Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:31:40 functional-955523 containerd[3606]: time="2025-10-18T12:31:40.222626560Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:31:40 functional-955523 containerd[3606]: time="2025-10-18T12:31:40.222681533Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=11740"
	Oct 18 12:31:47 functional-955523 containerd[3606]: time="2025-10-18T12:31:47.690219829Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 18 12:31:47 functional-955523 containerd[3606]: time="2025-10-18T12:31:47.692666235Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:31:47 functional-955523 containerd[3606]: time="2025-10-18T12:31:47.818216678Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:31:48 functional-955523 containerd[3606]: time="2025-10-18T12:31:48.120005875Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:31:48 functional-955523 containerd[3606]: time="2025-10-18T12:31:48.120135586Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Oct 18 12:31:59 functional-955523 containerd[3606]: time="2025-10-18T12:31:59.689676469Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 18 12:31:59 functional-955523 containerd[3606]: time="2025-10-18T12:31:59.692034992Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:31:59 functional-955523 containerd[3606]: time="2025-10-18T12:31:59.820678248Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:32:00 functional-955523 containerd[3606]: time="2025-10-18T12:32:00.175463281Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:32:00 functional-955523 containerd[3606]: time="2025-10-18T12:32:00.175598539Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
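	
	Two distinct failures repeat through this section: each pull first logs `failed to decode hosts.toml` with "invalid `host` tree", meaning the registry hosts file under /etc/containerd/certs.d/ is malformed (containerd then falls back to the default endpoint), and the pull itself then fails with Docker Hub's unauthenticated 429 rate limit. For reference, a minimal well-formed /etc/containerd/certs.d/docker.io/hosts.toml looks like the sketch below; the mirror URL is a placeholder assumption.
	
	    # Minimal hosts.toml sketch for docker.io; mirror URL is hypothetical.
	    server = "https://registry-1.docker.io"
	
	    [host."https://mirror.example.com"]
	      capabilities = ["pull", "resolve"]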
	
	
	==> coredns [176afc34450efcb861de660c1d660228a48104355b0efc6167b91e80ad5bd489] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55783 - 13646 "HINFO IN 2965834670196057044.7525666574534054471. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02149907s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [482c932303b97dc57ee5c86e642c514752840fdd30f0d7f8d0538d0e0ef2de95] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50117 - 8560 "HINFO IN 9133458471581067057.4151858571828751761. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03122877s
	
	
	==> describe nodes <==
	Name:               functional-955523
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-955523
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=functional-955523
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_16_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:16:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-955523
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:33:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:29:15 +0000   Sat, 18 Oct 2025 12:16:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:29:15 +0000   Sat, 18 Oct 2025 12:16:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:29:15 +0000   Sat, 18 Oct 2025 12:16:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:29:15 +0000   Sat, 18 Oct 2025 12:17:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-955523
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                68d1fac0-3a30-4775-ac10-1725872276da
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-sgbjm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     hello-node-connect-7d85dfc575-486zs           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 coredns-66bc5c9577-jfd97                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     17m
	  kube-system                 etcd-functional-955523                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         17m
	  kube-system                 kindnet-g62kl                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      17m
	  kube-system                 kube-apiserver-functional-955523              250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-functional-955523     200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-wp97m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-functional-955523              100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-zxtmh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-htgrk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17m                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 17m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17m                kubelet          Node functional-955523 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17m                kubelet          Node functional-955523 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17m                kubelet          Node functional-955523 status is now: NodeHasSufficientPID
	  Normal   Starting                 17m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           17m                node-controller  Node functional-955523 event: Registered Node functional-955523 in Controller
	  Normal   NodeReady                16m                kubelet          Node functional-955523 status is now: NodeReady
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node functional-955523 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node functional-955523 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node functional-955523 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                node-controller  Node functional-955523 event: Registered Node functional-955523 in Controller
	
	
	==> dmesg <==
	[Oct18 11:37] overlayfs: idmapped layers are currently not supported
	[Oct18 11:38] overlayfs: idmapped layers are currently not supported
	[Oct18 11:40] overlayfs: idmapped layers are currently not supported
	[Oct18 11:42] overlayfs: idmapped layers are currently not supported
	[Oct18 11:43] overlayfs: idmapped layers are currently not supported
	[ +44.292171] overlayfs: idmapped layers are currently not supported
	[  +9.552091] overlayfs: idmapped layers are currently not supported
	[Oct18 11:44] overlayfs: idmapped layers are currently not supported
	[Oct18 11:45] overlayfs: idmapped layers are currently not supported
	[Oct18 11:47] overlayfs: idmapped layers are currently not supported
	[ +55.826989] overlayfs: idmapped layers are currently not supported
	[Oct18 11:48] overlayfs: idmapped layers are currently not supported
	[Oct18 11:49] overlayfs: idmapped layers are currently not supported
	[Oct18 11:50] overlayfs: idmapped layers are currently not supported
	[Oct18 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.885672] overlayfs: idmapped layers are currently not supported
	[ +14.381354] overlayfs: idmapped layers are currently not supported
	[Oct18 11:52] overlayfs: idmapped layers are currently not supported
	[Oct18 11:53] overlayfs: idmapped layers are currently not supported
	[Oct18 11:54] overlayfs: idmapped layers are currently not supported
	[Oct18 11:55] overlayfs: idmapped layers are currently not supported
	[ +48.139503] overlayfs: idmapped layers are currently not supported
	[Oct18 11:56] overlayfs: idmapped layers are currently not supported
	[Oct18 11:58] overlayfs: idmapped layers are currently not supported
	[Oct18 12:00] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [091437cb53c8208a269d6f9eb67c48d017cbd39de23f6d0c769653bdcf8b8576] <==
	{"level":"warn","ts":"2025-10-18T12:16:35.853241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:35.871922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:35.892443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:35.920122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:35.933591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:35.966622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:36.067935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36508","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:18:04.063823Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T12:18:04.064061Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-955523","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-18T12:18:04.064301Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:18:04.065014Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:18:04.065054Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:18:04.065073Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-18T12:18:04.065153Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T12:18:04.065165Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-18T12:18:04.065413Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:18:04.065450Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:18:04.065458Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T12:18:04.065538Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:18:04.065566Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:18:04.065575Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:18:04.068417Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-18T12:18:04.068507Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:18:04.068547Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-18T12:18:04.068554Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-955523","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [3d6e189351ed915d09d07de968fbf97a1f10801b7148a5315c28032aa8ee2b6c] <==
	{"level":"warn","ts":"2025-10-18T12:18:11.899253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:11.913699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:11.942586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:11.971661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:11.988762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.011419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.023564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.040039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.056960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.072777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.088231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.104610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.118084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.133553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.155135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.182136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.197063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.211196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.286942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53280","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:28:10.805926Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":950}
	{"level":"info","ts":"2025-10-18T12:28:10.814525Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":950,"took":"8.333821ms","hash":3809581160,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":2961408,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-10-18T12:28:10.814577Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3809581160,"revision":950,"compact-revision":-1}
	{"level":"info","ts":"2025-10-18T12:33:10.812573Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1269}
	{"level":"info","ts":"2025-10-18T12:33:10.816199Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1269,"took":"3.297672ms","hash":467796134,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":2408448,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-10-18T12:33:10.816248Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":467796134,"revision":1269,"compact-revision":950}
	
	
	==> kernel <==
	 12:34:02 up 14:16,  0 user,  load average: 0.03, 0.27, 0.76
	Linux functional-955523 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9768e1a243da307a5e9d75450f025a8932218255b9e0c16d4a6eb1ad3271fff8] <==
	I1018 12:31:55.260822       1 main.go:301] handling current node
	I1018 12:32:05.261423       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:32:05.261459       1 main.go:301] handling current node
	I1018 12:32:15.261233       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:32:15.261333       1 main.go:301] handling current node
	I1018 12:32:25.260973       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:32:25.261014       1 main.go:301] handling current node
	I1018 12:32:35.261428       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:32:35.261467       1 main.go:301] handling current node
	I1018 12:32:45.268230       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:32:45.268272       1 main.go:301] handling current node
	I1018 12:32:55.266775       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:32:55.266811       1 main.go:301] handling current node
	I1018 12:33:05.261582       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:33:05.261616       1 main.go:301] handling current node
	I1018 12:33:15.261412       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:33:15.261466       1 main.go:301] handling current node
	I1018 12:33:25.264187       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:33:25.264222       1 main.go:301] handling current node
	I1018 12:33:35.262562       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:33:35.262600       1 main.go:301] handling current node
	I1018 12:33:45.265041       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:33:45.265084       1 main.go:301] handling current node
	I1018 12:33:55.261420       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:33:55.261459       1 main.go:301] handling current node
	
	
	==> kindnet [d6f640024b52e52344f6d4beea3692e49f480ff306b37e7e7949cd611eb5733a] <==
	I1018 12:16:46.010069       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:16:46.010337       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1018 12:16:46.010476       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:16:46.010489       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:16:46.010503       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:16:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:16:46.212543       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:16:46.212714       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:16:46.212789       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:16:46.213680       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 12:17:16.213129       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 12:17:16.213295       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 12:17:16.213395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 12:17:16.214106       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1018 12:17:17.713809       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:17:17.713912       1 metrics.go:72] Registering metrics
	I1018 12:17:17.714060       1 controller.go:711] "Syncing nftables rules"
	I1018 12:17:26.215955       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:17:26.215997       1 main.go:301] handling current node
	I1018 12:17:36.213540       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:17:36.213644       1 main.go:301] handling current node
	I1018 12:17:46.216720       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:17:46.216758       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5b3b426b0241c3bc68a439120feb2f099fa5671ef78cf372487f7863c3e46bb6] <==
	I1018 12:18:13.026661       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 12:18:13.027920       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 12:18:13.029678       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 12:18:13.029885       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 12:18:13.037447       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 12:18:13.029911       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 12:18:13.030025       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:18:13.759347       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:18:13.807791       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1018 12:18:14.177814       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1018 12:18:14.179111       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:18:14.184235       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:18:14.653695       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:18:14.818309       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:18:14.897779       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:18:14.906578       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:18:16.498535       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:18:28.340223       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.170.55"}
	I1018 12:18:37.935381       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.114.85"}
	I1018 12:18:42.802569       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.46.35"}
	I1018 12:28:12.945550       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:28:44.011433       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.178.61"}
	I1018 12:29:01.585774       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:29:01.894459       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.82.92"}
	I1018 12:29:01.915158       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.165.85"}
	
	
	==> kube-controller-manager [49bcc42178cf4017980732207b69c732f49dbb0e1d3cb2a5b51aeda669460337] <==
	I1018 12:17:56.138096       1 serving.go:386] Generated self-signed cert in-memory
	I1018 12:17:58.110568       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 12:17:58.110607       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:17:58.112555       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 12:17:58.112780       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 12:17:58.113192       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 12:17:58.113348       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:18:08.114886       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
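	This instance gave up after ten seconds because the apiserver on 192.168.49.2:8441 was still restarting; the replacement controller-manager below syncs its caches normally at 12:18:16. The same /healthz probe can be issued by hand, sketched here assuming the kubectl context is named after the minikube profile:
	
	  # hypothetical manual probe of the endpoint the controller-manager polls
	  kubectl --context functional-955523 get --raw /healthz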
	
	
	==> kube-controller-manager [a0c96f46d08ea5d6f2ae6eea6e32f62be57d0879bb28524e38800702fc8a9a34] <==
	I1018 12:18:16.339780       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 12:18:16.340046       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 12:18:16.340127       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 12:18:16.340446       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:18:16.340466       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:18:16.340474       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:18:16.342596       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:18:16.342636       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 12:18:16.342679       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 12:18:16.342742       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:18:16.343729       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:18:16.351676       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:18:16.355886       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 12:18:16.356056       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 12:18:16.356211       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-955523"
	I1018 12:18:16.356318       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	E1018 12:29:01.680981       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.702638       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.711515       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.714064       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.728873       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.739701       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.747431       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.748016       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.776642       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
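	The repeated "serviceaccount \"kubernetes-dashboard\" not found" errors are a creation-order race: the ReplicaSets and the ServiceAccount were all applied within the same second (12:29:01), and the replica-set controller retries until the account exists. One way to confirm it was eventually created, again assuming a profile-named context:
	
	  kubectl --context functional-955523 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard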
	
	
	==> kube-proxy [1fee019c9744e93950b4d8d93cb88fa80e7fe6aaab1f11c1690f707230b350e4] <==
	I1018 12:17:57.461604       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1018 12:17:57.463297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-955523&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:17:58.974912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-955523&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:18:02.161409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-955523&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:18:05.813191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-955523&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1018 12:18:17.561880       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:18:17.562074       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:18:17.562215       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:18:17.583108       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:18:17.583220       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:18:17.588971       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:18:17.589380       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:18:17.589440       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:18:17.591801       1 config.go:200] "Starting service config controller"
	I1018 12:18:17.591824       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:18:17.591969       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:18:17.591982       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:18:17.592063       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:18:17.592142       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:18:17.594131       1 config.go:309] "Starting node config controller"
	I1018 12:18:17.594383       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:18:17.594488       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:18:17.692212       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:18:17.692214       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:18:17.692252       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [65c34c830786f317cc2cdced96beedd1d5f9539253820d652ecb98011884fac9] <==
	I1018 12:16:45.970560       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:16:46.054142       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:16:46.161281       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:16:46.161456       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:16:46.161748       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:16:46.207415       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:16:46.207635       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:16:46.216525       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:16:46.217577       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:16:46.217735       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:16:46.226110       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:16:46.226290       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:16:46.226688       1 config.go:200] "Starting service config controller"
	I1018 12:16:46.226786       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:16:46.227207       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:16:46.227310       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:16:46.227905       1 config.go:309] "Starting node config controller"
	I1018 12:16:46.228051       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:16:46.228141       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:16:46.326851       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:16:46.326923       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:16:46.327577       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
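	Both kube-proxy instances log the same configuration warning: with nodePortAddresses unset, NodePort connections are accepted on every local IP. The warning already names the fix; on a kubeadm-style cluster one way to apply it is through the kube-proxy ConfigMap, sketched below (the "primary" keyword needs a recent kube-proxy, which v1.34.1 is; this test run does not actually change the setting):
	
	  # under the KubeProxyConfiguration document, set:
	  #   nodePortAddresses: ["primary"]
	  kubectl --context functional-955523 -n kube-system edit configmap kube-proxy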
	
	
	==> kube-scheduler [42cfd21f04099178773cb63ede1529b3067d261e15361d79ce7607d398c1864c] <==
	E1018 12:18:01.438340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:18:01.587366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:18:01.724385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:18:01.791336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:18:01.913550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:18:04.626394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:18:05.022677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:18:05.148542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:18:05.232126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:18:05.568461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:18:05.600149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:18:05.707207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:18:05.755268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:18:06.011658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:18:06.575506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:18:06.743469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:18:06.897246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:18:06.963244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:18:07.045131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:18:07.373861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:18:07.634239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:18:07.654726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:18:07.773832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:18:07.853320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 12:18:15.930104       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ffdc1092e749b4668b8ea43725493950b34be136dea12e415cbf3422cb2db3b0] <==
	E1018 12:16:37.201070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:16:37.201127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:16:37.201172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:16:37.201206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:16:37.201238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:16:37.201274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:16:37.201312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:16:37.201354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:16:37.201388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:16:37.201418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:16:37.201456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:16:38.021893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:16:38.079769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:16:38.083144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:16:38.101016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:16:38.105387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:16:38.106576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:16:38.116416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:16:38.152432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1018 12:16:40.773457       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:17:53.851512       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 12:17:53.851621       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 12:17:53.851632       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 12:17:53.851667       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 12:17:53.851682       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 12:33:07 functional-955523 kubelet[4605]: E1018 12:33:07.688627    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-486zs" podUID="a5b67887-0934-4170-9202-17973ef3bc1b"
	Oct 18 12:33:09 functional-955523 kubelet[4605]: E1018 12:33:09.690163    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zxtmh" podUID="1cfdcb4a-8c50-411e-9d24-89f2f73f8c37"
	Oct 18 12:33:12 functional-955523 kubelet[4605]: E1018 12:33:12.689851    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="fffe085f-744c-4c3f-8fb5-42cdb69bb49d"
	Oct 18 12:33:14 functional-955523 kubelet[4605]: E1018 12:33:14.688888    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d355d915-4156-4d0d-b780-f3f53fb401a3"
	Oct 18 12:33:16 functional-955523 kubelet[4605]: E1018 12:33:16.689075    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-htgrk" podUID="7d8ba8bf-3ba1-46ff-ad2a-80d36cf64c16"
	Oct 18 12:33:20 functional-955523 kubelet[4605]: E1018 12:33:20.688380    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sgbjm" podUID="468c9b98-b12b-47b2-b2ce-5096c29aa92d"
	Oct 18 12:33:21 functional-955523 kubelet[4605]: E1018 12:33:21.689497    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-486zs" podUID="a5b67887-0934-4170-9202-17973ef3bc1b"
	Oct 18 12:33:23 functional-955523 kubelet[4605]: E1018 12:33:23.689962    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zxtmh" podUID="1cfdcb4a-8c50-411e-9d24-89f2f73f8c37"
	Oct 18 12:33:24 functional-955523 kubelet[4605]: E1018 12:33:24.688967    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="fffe085f-744c-4c3f-8fb5-42cdb69bb49d"
	Oct 18 12:33:29 functional-955523 kubelet[4605]: E1018 12:33:29.689318    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d355d915-4156-4d0d-b780-f3f53fb401a3"
	Oct 18 12:33:29 functional-955523 kubelet[4605]: E1018 12:33:29.690423    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-htgrk" podUID="7d8ba8bf-3ba1-46ff-ad2a-80d36cf64c16"
	Oct 18 12:33:33 functional-955523 kubelet[4605]: E1018 12:33:33.689394    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sgbjm" podUID="468c9b98-b12b-47b2-b2ce-5096c29aa92d"
	Oct 18 12:33:34 functional-955523 kubelet[4605]: E1018 12:33:34.689154    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zxtmh" podUID="1cfdcb4a-8c50-411e-9d24-89f2f73f8c37"
	Oct 18 12:33:35 functional-955523 kubelet[4605]: E1018 12:33:35.688516    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-486zs" podUID="a5b67887-0934-4170-9202-17973ef3bc1b"
	Oct 18 12:33:36 functional-955523 kubelet[4605]: E1018 12:33:36.689748    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="fffe085f-744c-4c3f-8fb5-42cdb69bb49d"
	Oct 18 12:33:44 functional-955523 kubelet[4605]: E1018 12:33:44.690254    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sgbjm" podUID="468c9b98-b12b-47b2-b2ce-5096c29aa92d"
	Oct 18 12:33:44 functional-955523 kubelet[4605]: E1018 12:33:44.690874    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d355d915-4156-4d0d-b780-f3f53fb401a3"
	Oct 18 12:33:44 functional-955523 kubelet[4605]: E1018 12:33:44.692095    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-htgrk" podUID="7d8ba8bf-3ba1-46ff-ad2a-80d36cf64c16"
	Oct 18 12:33:47 functional-955523 kubelet[4605]: E1018 12:33:47.689305    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-486zs" podUID="a5b67887-0934-4170-9202-17973ef3bc1b"
	Oct 18 12:33:48 functional-955523 kubelet[4605]: E1018 12:33:48.690965    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zxtmh" podUID="1cfdcb4a-8c50-411e-9d24-89f2f73f8c37"
	Oct 18 12:33:51 functional-955523 kubelet[4605]: E1018 12:33:51.689689    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="fffe085f-744c-4c3f-8fb5-42cdb69bb49d"
	Oct 18 12:33:55 functional-955523 kubelet[4605]: E1018 12:33:55.689018    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d355d915-4156-4d0d-b780-f3f53fb401a3"
	Oct 18 12:33:57 functional-955523 kubelet[4605]: E1018 12:33:57.689075    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-htgrk" podUID="7d8ba8bf-3ba1-46ff-ad2a-80d36cf64c16"
	Oct 18 12:33:58 functional-955523 kubelet[4605]: E1018 12:33:58.689673    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sgbjm" podUID="468c9b98-b12b-47b2-b2ce-5096c29aa92d"
	Oct 18 12:34:00 functional-955523 kubelet[4605]: E1018 12:34:00.691774    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zxtmh" podUID="1cfdcb4a-8c50-411e-9d24-89f2f73f8c37"
	
	
	==> storage-provisioner [e7653bc9c7bce0429bf09242499cd82ae98695c13464ab2a3a7fdd178f7f0e1e] <==
	W1018 12:33:37.857547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:39.860673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:39.867810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:41.871431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:41.875960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:43.879434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:43.883803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:45.887524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:45.892296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:47.895418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:47.899966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:49.903828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:49.910613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:51.914351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:51.919079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:53.922561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:53.928011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:55.931743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:55.938443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:57.942462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:57.946724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:59.949823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:33:59.956318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:34:01.959587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:34:01.964791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fe295069734951076b4f9abc072eedf18f67c0e70fdc6189c67ca72bb4c271d6] <==
	I1018 12:17:54.879457       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 12:17:54.890543       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
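
Every kubelet "Error syncing pod" entry in the log above reports the same root cause: anonymous pulls from registry-1.docker.io are rejected with HTTP 429 (toomanyrequests), so every pod that references a Docker Hub image (nginx, echo-server, the kubernetesui images) is stuck in ImagePullBackOff. The storage-provisioner output is consistent with that reading: the earlier instance (fe2950...) exited at 12:17:54 only because the apiserver was not yet reachable, while the current instance (e7653...) is healthy and merely logs the v1 Endpoints deprecation warning. A minimal workaround sketch for a rate-limited CI host, assuming Docker Hub credentials are available in the hypothetical DOCKERHUB_USER/DOCKERHUB_TOKEN variables (this is not what the harness does today):

	# Create a pull secret and attach it to the default service account so
	# kubelet authenticates its Docker Hub pulls instead of hitting the
	# anonymous rate limit.
	kubectl --context functional-955523 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKERHUB_USER" \
	  --docker-password="$DOCKERHUB_TOKEN"
	kubectl --context functional-955523 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'

Note that the secret is resolved when a pod is admitted, so the pods already sitting in ImagePullBackOff would need to be recreated to pick it up.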
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-955523 -n functional-955523
helpers_test.go:269: (dbg) Run:  kubectl --context functional-955523 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-sgbjm hello-node-connect-7d85dfc575-486zs nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zxtmh kubernetes-dashboard-855c9754f9-htgrk
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-955523 describe pod busybox-mount hello-node-75c85bcc94-sgbjm hello-node-connect-7d85dfc575-486zs nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zxtmh kubernetes-dashboard-855c9754f9-htgrk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-955523 describe pod busybox-mount hello-node-75c85bcc94-sgbjm hello-node-connect-7d85dfc575-486zs nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zxtmh kubernetes-dashboard-855c9754f9-htgrk: exit status 1 (135.283429ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-955523/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:28:49 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://20f85e75cdfd2160d5fe4664b43359db1ec14ba0b85d3986c72a3a00cfa1c02f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 18 Oct 2025 12:28:52 +0000
	      Finished:     Sat, 18 Oct 2025 12:28:52 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwzw8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fwzw8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m14s  default-scheduler  Successfully assigned default/busybox-mount to functional-955523
	  Normal  Pulling    5m13s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m11s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.124s (2.124s including waiting). Image size: 1935750 bytes.
	  Normal  Created    5m11s  kubelet            Created container: mount-munger
	  Normal  Started    5m11s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-sgbjm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-955523/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:18:37 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-grlqr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-grlqr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  15m                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-sgbjm to functional-955523
	  Warning  Failed     15m                 kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    12m (x5 over 15m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     12m (x5 over 15m)   kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x4 over 15m)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    19s (x65 over 15m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     19s (x65 over 15m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-486zs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-955523/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:28:43 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49nxk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-49nxk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m20s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-486zs to functional-955523
	  Warning  Failed     3m48s (x3 over 5m4s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m24s (x5 over 5m19s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     2m23s (x2 over 5m19s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m23s (x5 over 5m19s)  kubelet            Error: ErrImagePull
	  Warning  Failed     16s (x20 over 5m18s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    1s (x21 over 5m18s)    kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-955523/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:18:42 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jtx6g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jtx6g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  15m                 default-scheduler  Successfully assigned default/nginx-svc to functional-955523
	  Warning  Failed     14m                 kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    12m (x5 over 15m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     12m (x4 over 15m)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     12m (x5 over 15m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    12s (x64 over 15m)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     12s (x64 over 15m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-955523/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:24:42 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wsxg9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-wsxg9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m21s                  default-scheduler  Successfully assigned default/sp-pod to functional-955523
	  Warning  Failed     7m56s                  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    6m31s (x5 over 9m21s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     6m30s (x4 over 9m20s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     6m30s (x5 over 9m20s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4m9s (x21 over 9m20s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4m9s (x21 over 9m20s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-zxtmh" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-htgrk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-955523 describe pod busybox-mount hello-node-75c85bcc94-sgbjm hello-node-connect-7d85dfc575-486zs nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zxtmh kubernetes-dashboard-855c9754f9-htgrk: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.63s)
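
The DashboardCmd failure is the same Docker Hub 429 seen above: both kubernetesui images are pinned by digest on docker.io and never made it onto the node. One hedged alternative for rate-limited environments is to pre-seed the images into the cluster's image store so kubelet never has to contact the registry. A sketch, assuming the images are already available in a local cache that minikube image load can read, and that the digest-pinned references in the dashboard manifests can be satisfied by the preloaded tags:

	# Preload the dashboard images into functional-955523's containerd store.
	minikube -p functional-955523 image load docker.io/kubernetesui/dashboard:v2.7.0
	minikube -p functional-955523 image load docker.io/kubernetesui/metrics-scraper:v1.0.8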

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-955523 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-955523 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-486zs" [a5b67887-0934-4170-9202-17973ef3bc1b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-955523 -n functional-955523
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-18 12:38:44.353914098 +0000 UTC m=+2283.611552216
functional_test.go:1645: (dbg) Run:  kubectl --context functional-955523 describe po hello-node-connect-7d85dfc575-486zs -n default
functional_test.go:1645: (dbg) kubectl --context functional-955523 describe po hello-node-connect-7d85dfc575-486zs -n default:
Name:             hello-node-connect-7d85dfc575-486zs
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-955523/192.168.49.2
Start Time:       Sat, 18 Oct 2025 12:28:43 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49nxk (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-49nxk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-486zs to functional-955523
Warning  Failed     8m29s (x3 over 9m45s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    7m5s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x2 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m4s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-955523 logs hello-node-connect-7d85dfc575-486zs -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-955523 logs hello-node-connect-7d85dfc575-486zs -n default: exit status 1 (106.388367ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-486zs" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-955523 logs hello-node-connect-7d85dfc575-486zs -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-955523 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-486zs
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-955523/192.168.49.2
Start Time:       Sat, 18 Oct 2025 12:28:43 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49nxk (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-49nxk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-486zs to functional-955523
Warning  Failed     8m29s (x3 over 9m45s)   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    7m5s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x2 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m4s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-955523 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-955523 logs -l app=hello-node-connect: exit status 1 (89.338641ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-486zs" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-955523 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-955523 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.178.61
IPs:                      10.110.178.61
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30150/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
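Note the empty Endpoints: line above. The service selector matches only hello-node-connect-7d85dfc575-486zs, and because that pod's container never started it never passed readiness and was never added to the endpoints, so NodePort 30150 has nothing to route to. This can be confirmed directly, e.g.:

	# Empty endpoints plus a waiting reason of ImagePullBackOff pin the
	# failure on the image pull rather than on the Service wiring.
	kubectl --context functional-955523 get endpoints hello-node-connect
	kubectl --context functional-955523 get pods -l app=hello-node-connect \
	  -o jsonpath='{.items[*].status.containerStatuses[*].state.waiting.reason}'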
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-955523
helpers_test.go:243: (dbg) docker inspect functional-955523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c",
	        "Created": "2025-10-18T12:16:14.008246334Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2107283,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:16:14.069510242Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c/hosts",
	        "LogPath": "/var/lib/docker/containers/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c-json.log",
	        "Name": "/functional-955523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-955523:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-955523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c",
	                "LowerDir": "/var/lib/docker/overlay2/6e9eaae7536306379e0cae21c86cbbf1542a5c605653dc40dc5bea4a058bdfb8-init/diff:/var/lib/docker/overlay2/647b2423f8222638985dff90791465004ec84c7fd61ca3830bba92bce09f80ef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e9eaae7536306379e0cae21c86cbbf1542a5c605653dc40dc5bea4a058bdfb8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e9eaae7536306379e0cae21c86cbbf1542a5c605653dc40dc5bea4a058bdfb8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e9eaae7536306379e0cae21c86cbbf1542a5c605653dc40dc5bea4a058bdfb8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-955523",
	                "Source": "/var/lib/docker/volumes/functional-955523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-955523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-955523",
	                "name.minikube.sigs.k8s.io": "functional-955523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "adfe4b4bccae20f103c88e75ee04efa9395565011b987ceb79a51e3a57d55dca",
	            "SandboxKey": "/var/run/docker/netns/adfe4b4bccae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35709"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35710"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35713"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35711"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35712"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-955523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:d4:c2:3f:ec:f9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "42cfb9c176848d2ffeccdf17874138cf42d5bcd8128808bcdc9dac0a8534a110",
	                    "EndpointID": "c0b036060bcdeb0e9c3fb4f11cc997807f67c06fafda01adb14cd2a83f1d025d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-955523",
	                        "e31280ad3c62"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
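
The docker inspect output rules out the node itself: the container is Running and privileged, with 4 GiB of memory and 2 CPUs, and all control-plane ports (including 8441) are published on 127.0.0.1, so the pull failures are registry-side rather than a networking problem on the kic node. For further confirmation one could probe the registry from inside the node; a sketch, assuming curl is present in the kicbase image:

	# A 401 challenge here is the normal anonymous response; the 429s only
	# appear once manifest pulls exceed the unauthenticated rate limit.
	minikube -p functional-955523 ssh -- curl -sI https://registry-1.docker.io/v2/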
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-955523 -n functional-955523
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-955523 logs -n 25: (1.454061321s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-955523 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ ssh            │ functional-955523 ssh -- ls -la /mount-9p                                                                          │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ ssh            │ functional-955523 ssh sudo umount -f /mount-9p                                                                     │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ mount          │ -p functional-955523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1692705097/001:/mount1 --alsologtostderr -v=1 │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ mount          │ -p functional-955523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1692705097/001:/mount2 --alsologtostderr -v=1 │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ mount          │ -p functional-955523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1692705097/001:/mount3 --alsologtostderr -v=1 │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ ssh            │ functional-955523 ssh findmnt -T /mount1                                                                           │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ ssh            │ functional-955523 ssh findmnt -T /mount1                                                                           │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ ssh            │ functional-955523 ssh findmnt -T /mount2                                                                           │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ ssh            │ functional-955523 ssh findmnt -T /mount3                                                                           │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ mount          │ -p functional-955523 --kill=true                                                                                   │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ start          │ -p functional-955523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd    │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ start          │ -p functional-955523 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd              │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ start          │ -p functional-955523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd    │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:29 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-955523 --alsologtostderr -v=1                                                     │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:29 UTC │                     │
	│ update-context │ functional-955523 update-context --alsologtostderr -v=2                                                            │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:34 UTC │ 18 Oct 25 12:34 UTC │
	│ update-context │ functional-955523 update-context --alsologtostderr -v=2                                                            │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:34 UTC │ 18 Oct 25 12:34 UTC │
	│ update-context │ functional-955523 update-context --alsologtostderr -v=2                                                            │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:34 UTC │ 18 Oct 25 12:34 UTC │
	│ image          │ functional-955523 image ls --format short --alsologtostderr                                                        │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:34 UTC │ 18 Oct 25 12:34 UTC │
	│ image          │ functional-955523 image ls --format yaml --alsologtostderr                                                         │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:34 UTC │ 18 Oct 25 12:34 UTC │
	│ ssh            │ functional-955523 ssh pgrep buildkitd                                                                              │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:34 UTC │                     │
	│ image          │ functional-955523 image build -t localhost/my-image:functional-955523 testdata/build --alsologtostderr             │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:34 UTC │ 18 Oct 25 12:34 UTC │
	│ image          │ functional-955523 image ls                                                                                         │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:34 UTC │ 18 Oct 25 12:34 UTC │
	│ image          │ functional-955523 image ls --format json --alsologtostderr                                                         │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:34 UTC │ 18 Oct 25 12:34 UTC │
	│ image          │ functional-955523 image ls --format table --alsologtostderr                                                        │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:34 UTC │ 18 Oct 25 12:34 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:29:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:29:00.371311 2121122 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:29:00.371573 2121122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:29:00.371611 2121122 out.go:374] Setting ErrFile to fd 2...
	I1018 12:29:00.371634 2121122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:29:00.373612 2121122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
	I1018 12:29:00.374353 2121122 out.go:368] Setting JSON to false
	I1018 12:29:00.375500 2121122 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":51088,"bootTime":1760739453,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 12:29:00.375628 2121122 start.go:141] virtualization:  
	I1018 12:29:00.380667 2121122 out.go:179] * [functional-955523] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:29:00.387237 2121122 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:29:00.387251 2121122 notify.go:220] Checking for updates...
	I1018 12:29:00.394014 2121122 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:29:00.396938 2121122 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	I1018 12:29:00.400032 2121122 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	I1018 12:29:00.403051 2121122 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:29:00.406164 2121122 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:29:00.409911 2121122 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:29:00.410613 2121122 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:29:00.440054 2121122 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:29:00.440215 2121122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:29:00.503497 2121122 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 12:29:00.49269586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:29:00.503625 2121122 docker.go:318] overlay module found
	I1018 12:29:00.506801 2121122 out.go:179] * Using the docker driver based on the existing profile
	I1018 12:29:00.509581 2121122 start.go:305] selected driver: docker
	I1018 12:29:00.509606 2121122 start.go:925] validating driver "docker" against &{Name:functional-955523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-955523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:29:00.509727 2121122 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:29:00.513382 2121122 out.go:203] 
	W1018 12:29:00.516345 2121122 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 12:29:00.519132 2121122 out.go:203] 
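	The exit above is minikube's memory validation rejecting the 250MB dry run before any driver work starts: any request below the 1800MB floor fails this check. A request at or above the floor would pass the same validation; a sketch, not a command from this run:
	
	  out/minikube-linux-arm64 start -p functional-955523 --dry-run --memory 1800MB --alsologtostderr --driver=docker --container-runtime=containerd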
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	20f85e75cdfd2       1611cd07b61d5       9 minutes ago       Exited              mount-munger              0                   14f4f12d6989b       busybox-mount                               default
	e7653bc9c7bce       ba04bb24b9575       20 minutes ago      Running             storage-provisioner       2                   8fd44ed1df230       storage-provisioner                         kube-system
	5b3b426b0241c       43911e833d64d       20 minutes ago      Running             kube-apiserver            0                   94a8c004f82cd       kube-apiserver-functional-955523            kube-system
	a0c96f46d08ea       7eb2c6ff0c5a7       20 minutes ago      Running             kube-controller-manager   2                   0a85e91787857       kube-controller-manager-functional-955523   kube-system
	3d6e189351ed9       a1894772a478e       20 minutes ago      Running             etcd                      1                   831fb163c52b3       etcd-functional-955523                      kube-system
	fe29506973495       ba04bb24b9575       20 minutes ago      Exited              storage-provisioner       1                   8fd44ed1df230       storage-provisioner                         kube-system
	9768e1a243da3       b1a8c6f707935       20 minutes ago      Running             kindnet-cni               1                   39f7db02a4760       kindnet-g62kl                               kube-system
	1fee019c9744e       05baa95f5142d       20 minutes ago      Running             kube-proxy                1                   76b1c91847749       kube-proxy-wp97m                            kube-system
	49bcc42178cf4       7eb2c6ff0c5a7       20 minutes ago      Exited              kube-controller-manager   1                   0a85e91787857       kube-controller-manager-functional-955523   kube-system
	42cfd21f04099       b5f57ec6b9867       20 minutes ago      Running             kube-scheduler            1                   5a40d8c62b89c       kube-scheduler-functional-955523            kube-system
	482c932303b97       138784d87c9c5       20 minutes ago      Running             coredns                   1                   838d864c0cd20       coredns-66bc5c9577-jfd97                    kube-system
	176afc34450ef       138784d87c9c5       21 minutes ago      Exited              coredns                   0                   838d864c0cd20       coredns-66bc5c9577-jfd97                    kube-system
	65c34c830786f       05baa95f5142d       22 minutes ago      Exited              kube-proxy                0                   76b1c91847749       kube-proxy-wp97m                            kube-system
	d6f640024b52e       b1a8c6f707935       22 minutes ago      Exited              kindnet-cni               0                   39f7db02a4760       kindnet-g62kl                               kube-system
	ffdc1092e749b       b5f57ec6b9867       22 minutes ago      Exited              kube-scheduler            0                   5a40d8c62b89c       kube-scheduler-functional-955523            kube-system
	091437cb53c82       a1894772a478e       22 minutes ago      Exited              etcd                      0                   831fb163c52b3       etcd-functional-955523                      kube-system
	
	
	==> containerd <==
	Oct 18 12:34:36 functional-955523 containerd[3606]: time="2025-10-18T12:34:36.689916590Z" level=info msg="PullImage \"kicbase/echo-server:latest\""
	Oct 18 12:34:36 functional-955523 containerd[3606]: time="2025-10-18T12:34:36.692349597Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:34:36 functional-955523 containerd[3606]: time="2025-10-18T12:34:36.829135511Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:34:37 functional-955523 containerd[3606]: time="2025-10-18T12:34:37.104636473Z" level=error msg="PullImage \"kicbase/echo-server:latest\" failed" error="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:34:37 functional-955523 containerd[3606]: time="2025-10-18T12:34:37.104738403Z" level=info msg="stop pulling image docker.io/kicbase/echo-server:latest: active requests=0, bytes read=10999"
	Oct 18 12:34:37 functional-955523 containerd[3606]: time="2025-10-18T12:34:37.690127899Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 18 12:34:37 functional-955523 containerd[3606]: time="2025-10-18T12:34:37.692302641Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:34:37 functional-955523 containerd[3606]: time="2025-10-18T12:34:37.831255247Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:34:38 functional-955523 containerd[3606]: time="2025-10-18T12:34:38.116646232Z" level=error msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:34:38 functional-955523 containerd[3606]: time="2025-10-18T12:34:38.116695896Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=11047"
	Oct 18 12:34:44 functional-955523 containerd[3606]: time="2025-10-18T12:34:44.690346566Z" level=info msg="PullImage \"docker.io/nginx:alpine\""
	Oct 18 12:34:44 functional-955523 containerd[3606]: time="2025-10-18T12:34:44.692959178Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:34:44 functional-955523 containerd[3606]: time="2025-10-18T12:34:44.816348805Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:34:45 functional-955523 containerd[3606]: time="2025-10-18T12:34:45.137021139Z" level=error msg="PullImage \"docker.io/nginx:alpine\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:alpine\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:34:45 functional-955523 containerd[3606]: time="2025-10-18T12:34:45.137175309Z" level=info msg="stop pulling image docker.io/library/nginx:alpine: active requests=0, bytes read=10966"
	Oct 18 12:34:49 functional-955523 containerd[3606]: time="2025-10-18T12:34:49.691031892Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Oct 18 12:34:49 functional-955523 containerd[3606]: time="2025-10-18T12:34:49.693509780Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:34:49 functional-955523 containerd[3606]: time="2025-10-18T12:34:49.821149184Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:34:50 functional-955523 containerd[3606]: time="2025-10-18T12:34:50.092955771Z" level=error msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" failed" error="failed to pull and unpack image \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:34:50 functional-955523 containerd[3606]: time="2025-10-18T12:34:50.092996426Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=11015"
	Oct 18 12:35:25 functional-955523 containerd[3606]: time="2025-10-18T12:35:25.688997517Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Oct 18 12:35:25 functional-955523 containerd[3606]: time="2025-10-18T12:35:25.691434413Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:35:25 functional-955523 containerd[3606]: time="2025-10-18T12:35:25.850327129Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:35:26 functional-955523 containerd[3606]: time="2025-10-18T12:35:26.105315605Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:35:26 functional-955523 containerd[3606]: time="2025-10-18T12:35:26.105372736Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
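	Every pull in this window fails the same way: containerd first logs "failed to decode hosts.toml" with "invalid `host` tree", which it emits when a registry hosts.toml under /etc/containerd/certs.d/ is malformed, then falls through to registry-1.docker.io directly and is rejected with 429 Too Many Requests, Docker Hub's unauthenticated pull rate limit. For reference, a minimal hosts.toml that containerd's host parser accepts looks like the sketch below; the mirror URL is hypothetical, not taken from this run:
	
	  # /etc/containerd/certs.d/docker.io/hosts.toml
	  server = "https://registry-1.docker.io"
	
	  [host."https://mirror.example.internal"]
	    capabilities = ["pull", "resolve"]
	
	With a working mirror entry, pulls would be served from the mirror instead of counting against the Hub rate limit.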
	
	
	==> coredns [176afc34450efcb861de660c1d660228a48104355b0efc6167b91e80ad5bd489] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55783 - 13646 "HINFO IN 2965834670196057044.7525666574534054471. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02149907s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [482c932303b97dc57ee5c86e642c514752840fdd30f0d7f8d0538d0e0ef2de95] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50117 - 8560 "HINFO IN 9133458471581067057.4151858571828751761. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03122877s
	
	
	==> describe nodes <==
	Name:               functional-955523
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-955523
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=functional-955523
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_16_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:16:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-955523
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:38:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:37:26 +0000   Sat, 18 Oct 2025 12:16:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:37:26 +0000   Sat, 18 Oct 2025 12:16:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:37:26 +0000   Sat, 18 Oct 2025 12:16:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:37:26 +0000   Sat, 18 Oct 2025 12:17:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-955523
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                68d1fac0-3a30-4775-ac10-1725872276da
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-sgbjm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  default                     hello-node-connect-7d85dfc575-486zs           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-jfd97                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     22m
	  kube-system                 etcd-functional-955523                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         22m
	  kube-system                 kindnet-g62kl                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-functional-955523              250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-functional-955523     200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-wp97m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-functional-955523              100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-zxtmh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-htgrk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 22m                kube-proxy       
	  Normal   Starting                 20m                kube-proxy       
	  Normal   NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 22m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  22m                kubelet          Node functional-955523 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22m                kubelet          Node functional-955523 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22m                kubelet          Node functional-955523 status is now: NodeHasSufficientPID
	  Normal   Starting                 22m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           22m                node-controller  Node functional-955523 event: Registered Node functional-955523 in Controller
	  Normal   NodeReady                21m                kubelet          Node functional-955523 status is now: NodeReady
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node functional-955523 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node functional-955523 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node functional-955523 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           20m                node-controller  Node functional-955523 event: Registered Node functional-955523 in Controller
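	The node summary above can be regenerated against this profile with kubectl, assuming the context is named after the profile as elsewhere in this report:
	
	  kubectl --context functional-955523 describe node functional-955523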
	
	
	==> dmesg <==
	[Oct18 11:37] overlayfs: idmapped layers are currently not supported
	[Oct18 11:38] overlayfs: idmapped layers are currently not supported
	[Oct18 11:40] overlayfs: idmapped layers are currently not supported
	[Oct18 11:42] overlayfs: idmapped layers are currently not supported
	[Oct18 11:43] overlayfs: idmapped layers are currently not supported
	[ +44.292171] overlayfs: idmapped layers are currently not supported
	[  +9.552091] overlayfs: idmapped layers are currently not supported
	[Oct18 11:44] overlayfs: idmapped layers are currently not supported
	[Oct18 11:45] overlayfs: idmapped layers are currently not supported
	[Oct18 11:47] overlayfs: idmapped layers are currently not supported
	[ +55.826989] overlayfs: idmapped layers are currently not supported
	[Oct18 11:48] overlayfs: idmapped layers are currently not supported
	[Oct18 11:49] overlayfs: idmapped layers are currently not supported
	[Oct18 11:50] overlayfs: idmapped layers are currently not supported
	[Oct18 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.885672] overlayfs: idmapped layers are currently not supported
	[ +14.381354] overlayfs: idmapped layers are currently not supported
	[Oct18 11:52] overlayfs: idmapped layers are currently not supported
	[Oct18 11:53] overlayfs: idmapped layers are currently not supported
	[Oct18 11:54] overlayfs: idmapped layers are currently not supported
	[Oct18 11:55] overlayfs: idmapped layers are currently not supported
	[ +48.139503] overlayfs: idmapped layers are currently not supported
	[Oct18 11:56] overlayfs: idmapped layers are currently not supported
	[Oct18 11:58] overlayfs: idmapped layers are currently not supported
	[Oct18 12:00] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [091437cb53c8208a269d6f9eb67c48d017cbd39de23f6d0c769653bdcf8b8576] <==
	{"level":"warn","ts":"2025-10-18T12:16:35.853241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:35.871922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:35.892443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:35.920122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:35.933591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:35.966622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:36.067935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36508","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:18:04.063823Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T12:18:04.064061Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-955523","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-18T12:18:04.064301Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:18:04.065014Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:18:04.065054Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:18:04.065073Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-18T12:18:04.065153Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T12:18:04.065165Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-18T12:18:04.065413Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:18:04.065450Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:18:04.065458Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T12:18:04.065538Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:18:04.065566Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:18:04.065575Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:18:04.068417Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-18T12:18:04.068507Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:18:04.068547Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-18T12:18:04.068554Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-955523","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [3d6e189351ed915d09d07de968fbf97a1f10801b7148a5315c28032aa8ee2b6c] <==
	{"level":"warn","ts":"2025-10-18T12:18:11.971661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:11.988762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.011419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.023564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.040039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.056960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.072777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.088231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.104610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.118084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.133553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.155135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.182136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.197063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.211196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.286942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53280","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:28:10.805926Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":950}
	{"level":"info","ts":"2025-10-18T12:28:10.814525Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":950,"took":"8.333821ms","hash":3809581160,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":2961408,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-10-18T12:28:10.814577Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3809581160,"revision":950,"compact-revision":-1}
	{"level":"info","ts":"2025-10-18T12:33:10.812573Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1269}
	{"level":"info","ts":"2025-10-18T12:33:10.816199Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1269,"took":"3.297672ms","hash":467796134,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":2408448,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2025-10-18T12:33:10.816248Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":467796134,"revision":1269,"compact-revision":950}
	{"level":"info","ts":"2025-10-18T12:38:10.819265Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1793}
	{"level":"info","ts":"2025-10-18T12:38:10.823767Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1793,"took":"4.194633ms","hash":2857167749,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":2527232,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2025-10-18T12:38:10.823811Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2857167749,"revision":1793,"compact-revision":1269}
	
	
	==> kernel <==
	 12:38:46 up 14:21,  0 user,  load average: 0.26, 0.26, 0.62
	Linux functional-955523 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9768e1a243da307a5e9d75450f025a8932218255b9e0c16d4a6eb1ad3271fff8] <==
	I1018 12:36:45.262250       1 main.go:301] handling current node
	I1018 12:36:55.268129       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:36:55.268170       1 main.go:301] handling current node
	I1018 12:37:05.260731       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:37:05.260769       1 main.go:301] handling current node
	I1018 12:37:15.264508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:37:15.264545       1 main.go:301] handling current node
	I1018 12:37:25.265968       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:37:25.266075       1 main.go:301] handling current node
	I1018 12:37:35.261441       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:37:35.261478       1 main.go:301] handling current node
	I1018 12:37:45.267253       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:37:45.267395       1 main.go:301] handling current node
	I1018 12:37:55.265259       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:37:55.265384       1 main.go:301] handling current node
	I1018 12:38:05.260861       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:38:05.260897       1 main.go:301] handling current node
	I1018 12:38:15.261421       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:38:15.261456       1 main.go:301] handling current node
	I1018 12:38:25.264318       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:38:25.264355       1 main.go:301] handling current node
	I1018 12:38:35.261146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:38:35.261182       1 main.go:301] handling current node
	I1018 12:38:45.263965       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:38:45.264011       1 main.go:301] handling current node
	
	
	==> kindnet [d6f640024b52e52344f6d4beea3692e49f480ff306b37e7e7949cd611eb5733a] <==
	I1018 12:16:46.010069       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:16:46.010337       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1018 12:16:46.010476       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:16:46.010489       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:16:46.010503       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:16:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:16:46.212543       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:16:46.212714       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:16:46.212789       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:16:46.213680       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 12:17:16.213129       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 12:17:16.213295       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 12:17:16.213395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 12:17:16.214106       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1018 12:17:17.713809       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:17:17.713912       1 metrics.go:72] Registering metrics
	I1018 12:17:17.714060       1 controller.go:711] "Syncing nftables rules"
	I1018 12:17:26.215955       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:17:26.215997       1 main.go:301] handling current node
	I1018 12:17:36.213540       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:17:36.213644       1 main.go:301] handling current node
	I1018 12:17:46.216720       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:17:46.216758       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5b3b426b0241c3bc68a439120feb2f099fa5671ef78cf372487f7863c3e46bb6] <==
	I1018 12:18:13.027920       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 12:18:13.029678       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 12:18:13.029885       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 12:18:13.037447       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 12:18:13.029911       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 12:18:13.030025       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:18:13.759347       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:18:13.807791       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1018 12:18:14.177814       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1018 12:18:14.179111       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:18:14.184235       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:18:14.653695       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:18:14.818309       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:18:14.897779       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:18:14.906578       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:18:16.498535       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:18:28.340223       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.170.55"}
	I1018 12:18:37.935381       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.114.85"}
	I1018 12:18:42.802569       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.46.35"}
	I1018 12:28:12.945550       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:28:44.011433       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.178.61"}
	I1018 12:29:01.585774       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 12:29:01.894459       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.82.92"}
	I1018 12:29:01.915158       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.165.85"}
	I1018 12:38:12.945687       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [49bcc42178cf4017980732207b69c732f49dbb0e1d3cb2a5b51aeda669460337] <==
	I1018 12:17:56.138096       1 serving.go:386] Generated self-signed cert in-memory
	I1018 12:17:58.110568       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 12:17:58.110607       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:17:58.112555       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 12:17:58.112780       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 12:17:58.113192       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 12:17:58.113348       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:18:08.114886       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [a0c96f46d08ea5d6f2ae6eea6e32f62be57d0879bb28524e38800702fc8a9a34] <==
	I1018 12:18:16.339780       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 12:18:16.340046       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 12:18:16.340127       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 12:18:16.340446       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:18:16.340466       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:18:16.340474       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:18:16.342596       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:18:16.342636       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 12:18:16.342679       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 12:18:16.342742       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:18:16.343729       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:18:16.351676       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:18:16.355886       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 12:18:16.356056       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 12:18:16.356211       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-955523"
	I1018 12:18:16.356318       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	E1018 12:29:01.680981       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.702638       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.711515       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.714064       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.728873       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.739701       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.747431       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.748016       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 12:29:01.776642       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [1fee019c9744e93950b4d8d93cb88fa80e7fe6aaab1f11c1690f707230b350e4] <==
	I1018 12:17:57.461604       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1018 12:17:57.463297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-955523&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:17:58.974912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-955523&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:18:02.161409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-955523&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:18:05.813191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-955523&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1018 12:18:17.561880       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:18:17.562074       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:18:17.562215       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:18:17.583108       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:18:17.583220       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:18:17.588971       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:18:17.589380       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:18:17.589440       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:18:17.591801       1 config.go:200] "Starting service config controller"
	I1018 12:18:17.591824       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:18:17.591969       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:18:17.591982       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:18:17.592063       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:18:17.592142       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:18:17.594131       1 config.go:309] "Starting node config controller"
	I1018 12:18:17.594383       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:18:17.594488       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:18:17.692212       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:18:17.692214       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:18:17.692252       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [65c34c830786f317cc2cdced96beedd1d5f9539253820d652ecb98011884fac9] <==
	I1018 12:16:45.970560       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:16:46.054142       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:16:46.161281       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:16:46.161456       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:16:46.161748       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:16:46.207415       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:16:46.207635       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:16:46.216525       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:16:46.217577       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:16:46.217735       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:16:46.226110       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:16:46.226290       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:16:46.226688       1 config.go:200] "Starting service config controller"
	I1018 12:16:46.226786       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:16:46.227207       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:16:46.227310       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:16:46.227905       1 config.go:309] "Starting node config controller"
	I1018 12:16:46.228051       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:16:46.228141       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:16:46.326851       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:16:46.326923       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:16:46.327577       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
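
Note: both kube-proxy instances above follow the same startup pattern: build a node informer, let the reflector retry List/Watch until the apiserver answers (the "connection refused" lines cover the apiserver restart window), then proceed once "Caches are synced" is logged. A stripped-down sketch of that pattern, assuming a default kubeconfig:

package main

import (
	"log"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// One shared factory; kube-proxy builds its node informer the same way.
	factory := informers.NewSharedInformerFactory(cs, 0)
	nodes := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // reflectors retry List/Watch internally on "connection refused"

	// Block until the initial List has landed in the local cache,
	// i.e. the "Caches are synced" lines in the log above.
	if !cache.WaitForCacheSync(stop, nodes.HasSynced) {
		log.Fatal("timed out waiting for node informer cache")
	}
	log.Println("caches are synced; safe to program proxy rules")
}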
	
	
	==> kube-scheduler [42cfd21f04099178773cb63ede1529b3067d261e15361d79ce7607d398c1864c] <==
	E1018 12:18:01.438340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:18:01.587366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:18:01.724385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:18:01.791336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:18:01.913550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:18:04.626394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:18:05.022677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:18:05.148542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:18:05.232126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:18:05.568461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:18:05.600149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:18:05.707207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:18:05.755268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:18:06.011658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:18:06.575506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:18:06.743469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:18:06.897246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:18:06.963244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:18:07.045131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:18:07.373861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:18:07.634239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:18:07.654726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:18:07.773832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:18:07.853320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 12:18:15.930104       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ffdc1092e749b4668b8ea43725493950b34be136dea12e415cbf3422cb2db3b0] <==
	E1018 12:16:37.201070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:16:37.201127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:16:37.201172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:16:37.201206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:16:37.201238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:16:37.201274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:16:37.201312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:16:37.201354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:16:37.201388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:16:37.201418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:16:37.201456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:16:38.021893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:16:38.079769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:16:38.083144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:16:38.101016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:16:38.105387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:16:38.106576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:16:38.116416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:16:38.152432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1018 12:16:40.773457       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:17:53.851512       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 12:17:53.851621       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 12:17:53.851632       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 12:17:53.851667       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 12:17:53.851682       1 run.go:72] "command failed" err="finished without leader elect"
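
Note: the "forbidden" errors from this earlier scheduler instance are the usual boot race: its informers start listing before the apiserver has finished bootstrapping RBAC for system:kube-scheduler, and they clear once the bindings exist. The closing "finished without leader elect" line is the graceful-shutdown path when the scheduler stops without holding leadership. Below is the programmatic equivalent of `kubectl auth can-i` for one of the denied list calls, sketched with client-go (kubeconfig path assumed):

package main

import (
	"context"
	"fmt"
	"log"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Ask the apiserver whether the current identity may list csistoragecapacities,
	// mirroring one of the reflector calls that was denied above.
	sar := &authorizationv1.SelfSubjectAccessReview{
		Spec: authorizationv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Verb:     "list",
				Group:    "storage.k8s.io",
				Resource: "csistoragecapacities",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}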
	
	
	==> kubelet <==
	Oct 18 12:37:53 functional-955523 kubelet[4605]: E1018 12:37:53.690131    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zxtmh" podUID="1cfdcb4a-8c50-411e-9d24-89f2f73f8c37"
	Oct 18 12:37:56 functional-955523 kubelet[4605]: E1018 12:37:56.689413    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-486zs" podUID="a5b67887-0934-4170-9202-17973ef3bc1b"
	Oct 18 12:37:56 functional-955523 kubelet[4605]: E1018 12:37:56.690733    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d355d915-4156-4d0d-b780-f3f53fb401a3"
	Oct 18 12:37:59 functional-955523 kubelet[4605]: E1018 12:37:59.688972    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sgbjm" podUID="468c9b98-b12b-47b2-b2ce-5096c29aa92d"
	Oct 18 12:37:59 functional-955523 kubelet[4605]: E1018 12:37:59.690220    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-htgrk" podUID="7d8ba8bf-3ba1-46ff-ad2a-80d36cf64c16"
	Oct 18 12:38:02 functional-955523 kubelet[4605]: E1018 12:38:02.689891    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="fffe085f-744c-4c3f-8fb5-42cdb69bb49d"
	Oct 18 12:38:04 functional-955523 kubelet[4605]: E1018 12:38:04.689535    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zxtmh" podUID="1cfdcb4a-8c50-411e-9d24-89f2f73f8c37"
	Oct 18 12:38:09 functional-955523 kubelet[4605]: E1018 12:38:09.689195    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d355d915-4156-4d0d-b780-f3f53fb401a3"
	Oct 18 12:38:10 functional-955523 kubelet[4605]: E1018 12:38:10.689353    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-486zs" podUID="a5b67887-0934-4170-9202-17973ef3bc1b"
	Oct 18 12:38:14 functional-955523 kubelet[4605]: E1018 12:38:14.688560    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sgbjm" podUID="468c9b98-b12b-47b2-b2ce-5096c29aa92d"
	Oct 18 12:38:14 functional-955523 kubelet[4605]: E1018 12:38:14.690259    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-htgrk" podUID="7d8ba8bf-3ba1-46ff-ad2a-80d36cf64c16"
	Oct 18 12:38:16 functional-955523 kubelet[4605]: E1018 12:38:16.689633    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zxtmh" podUID="1cfdcb4a-8c50-411e-9d24-89f2f73f8c37"
	Oct 18 12:38:17 functional-955523 kubelet[4605]: E1018 12:38:17.688978    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="fffe085f-744c-4c3f-8fb5-42cdb69bb49d"
	Oct 18 12:38:23 functional-955523 kubelet[4605]: E1018 12:38:23.689156    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-486zs" podUID="a5b67887-0934-4170-9202-17973ef3bc1b"
	Oct 18 12:38:24 functional-955523 kubelet[4605]: E1018 12:38:24.688520    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d355d915-4156-4d0d-b780-f3f53fb401a3"
	Oct 18 12:38:27 functional-955523 kubelet[4605]: E1018 12:38:27.688970    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sgbjm" podUID="468c9b98-b12b-47b2-b2ce-5096c29aa92d"
	Oct 18 12:38:27 functional-955523 kubelet[4605]: E1018 12:38:27.690120    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zxtmh" podUID="1cfdcb4a-8c50-411e-9d24-89f2f73f8c37"
	Oct 18 12:38:28 functional-955523 kubelet[4605]: E1018 12:38:28.690499    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-htgrk" podUID="7d8ba8bf-3ba1-46ff-ad2a-80d36cf64c16"
	Oct 18 12:38:32 functional-955523 kubelet[4605]: E1018 12:38:32.689610    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="fffe085f-744c-4c3f-8fb5-42cdb69bb49d"
	Oct 18 12:38:36 functional-955523 kubelet[4605]: E1018 12:38:36.689331    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-486zs" podUID="a5b67887-0934-4170-9202-17973ef3bc1b"
	Oct 18 12:38:39 functional-955523 kubelet[4605]: E1018 12:38:39.688467    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d355d915-4156-4d0d-b780-f3f53fb401a3"
	Oct 18 12:38:39 functional-955523 kubelet[4605]: E1018 12:38:39.688911    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sgbjm" podUID="468c9b98-b12b-47b2-b2ce-5096c29aa92d"
	Oct 18 12:38:39 functional-955523 kubelet[4605]: E1018 12:38:39.690054    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/metrics-scraper/manifests/sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-zxtmh" podUID="1cfdcb4a-8c50-411e-9d24-89f2f73f8c37"
	Oct 18 12:38:39 functional-955523 kubelet[4605]: E1018 12:38:39.690495    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-htgrk" podUID="7d8ba8bf-3ba1-46ff-ad2a-80d36cf64c16"
	Oct 18 12:38:44 functional-955523 kubelet[4605]: E1018 12:38:44.691290    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="fffe085f-744c-4c3f-8fb5-42cdb69bb49d"
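
Note: every kubelet error above shares one root cause: unauthenticated pulls from registry-1.docker.io are rejected with 429 Too Many Requests, so all docker.io images sit in ImagePullBackOff. Mitigations are outside this run (authenticated pulls, a registry mirror, or preloading images into the node, e.g. with `minikube image load`). A small client-go sketch that surfaces every pod stuck this way, matching the post-mortem list further down (kubeconfig path assumed):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			// A waiting container with either reason is stuck in the kubelet's pull backoff.
			if w := st.State.Waiting; w != nil && (w.Reason == "ImagePullBackOff" || w.Reason == "ErrImagePull") {
				fmt.Printf("%s/%s container=%s image=%s\n  %s\n", p.Namespace, p.Name, st.Name, st.Image, w.Message)
			}
		}
	}
}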
	
	
	==> storage-provisioner [e7653bc9c7bce0429bf09242499cd82ae98695c13464ab2a3a7fdd178f7f0e1e] <==
	W1018 12:38:21.194813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:23.198157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:23.202647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:25.206003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:25.210671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:27.213702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:27.218730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:29.222302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:29.226495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:31.229479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:31.233700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:33.236712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:33.243434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:35.246119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:35.250669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:37.253481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:37.257778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:39.261337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:39.266392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:41.269511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:41.273814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:43.276405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:43.281085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:45.294679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:38:45.307459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fe295069734951076b4f9abc072eedf18f67c0e70fdc6189c67ca72bb4c271d6] <==
	I1018 12:17:54.879457       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 12:17:54.890543       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
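
Note: the Endpoints deprecation warning repeats every two seconds because the storage-provisioner's leader election still renews an Endpoints-based lock; since v1.33 the apiserver warns on every such request. The second instance shows the other failure mode: it started while the apiserver was down and exited fatally on its first version check. A minimal Lease-based leader-election sketch with client-go, which avoids the deprecation warning (lock name, namespace, and identity are illustrative assumptions):

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	host, _ := os.Hostname()
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "storage-provisioner", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: host},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader; start provisioning") },
			OnStoppedLeading: func() { log.Println("lost lease; shutting down") },
		},
	})
}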
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-955523 -n functional-955523
E1018 12:38:46.997255 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:269: (dbg) Run:  kubectl --context functional-955523 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-sgbjm hello-node-connect-7d85dfc575-486zs nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zxtmh kubernetes-dashboard-855c9754f9-htgrk
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-955523 describe pod busybox-mount hello-node-75c85bcc94-sgbjm hello-node-connect-7d85dfc575-486zs nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zxtmh kubernetes-dashboard-855c9754f9-htgrk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-955523 describe pod busybox-mount hello-node-75c85bcc94-sgbjm hello-node-connect-7d85dfc575-486zs nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zxtmh kubernetes-dashboard-855c9754f9-htgrk: exit status 1 (123.034675ms)
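
Note: the harness finds the non-running pods with a server-side field selector (status.phase!=Running). The same query via client-go, for reference (kubeconfig path assumed):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Mirrors: kubectl get po -A --field-selector=status.phase!=Running
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace + "/" + p.Name)
	}
}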

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-955523/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:28:49 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  containerd://20f85e75cdfd2160d5fe4664b43359db1ec14ba0b85d3986c72a3a00cfa1c02f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 18 Oct 2025 12:28:52 +0000
	      Finished:     Sat, 18 Oct 2025 12:28:52 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fwzw8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fwzw8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m58s  default-scheduler  Successfully assigned default/busybox-mount to functional-955523
	  Normal  Pulling    9m57s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m55s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.124s (2.124s including waiting). Image size: 1935750 bytes.
	  Normal  Created    9m55s  kubelet            Created container: mount-munger
	  Normal  Started    9m55s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-sgbjm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-955523/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:18:37 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-grlqr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-grlqr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  20m                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-sgbjm to functional-955523
	  Warning  Failed     20m                kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    17m (x5 over 20m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     17m (x5 over 20m)  kubelet            Error: ErrImagePull
	  Warning  Failed     17m (x4 over 19m)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    8s (x87 over 20m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     8s (x87 over 20m)  kubelet            Error: ImagePullBackOff
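
Note: the event counts above ("Pulling x5", "BackOff x87 over 20m") reflect the kubelet's image pull backoff: by default the wait starts around 10s, doubles per failed pull, and caps at 5m, while a BackOff event is emitted on each pod sync during the wait, which is why the BackOff count far exceeds the number of pull attempts. A toy sketch of that cadence (the constants are assumed kubelet defaults):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s initial image-pull backoff, doubling, 5m cap.
	backoff := 10 * time.Second
	elapsed := time.Duration(0)
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d at t=%v, next retry in %v\n", attempt, elapsed, backoff)
		elapsed += backoff
		backoff *= 2
		if backoff > 5*time.Minute {
			backoff = 5 * time.Minute
		}
	}
}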
	
	
	Name:             hello-node-connect-7d85dfc575-486zs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-955523/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:28:43 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49nxk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-49nxk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-486zs to functional-955523
	  Warning  Failed     8m32s (x3 over 9m48s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m8s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m7s (x2 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m7s (x5 over 10m)     kubelet            Error: ErrImagePull
	  Warning  Failed     5m (x20 over 10m)      kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m45s (x21 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-955523/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:18:42 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jtx6g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jtx6g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                From               Message
	  ----     ------     ----               ----               -------
	  Normal   Scheduled  20m                default-scheduler  Successfully assigned default/nginx-svc to functional-955523
	  Warning  Failed     18m                kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    17m (x5 over 20m)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     17m (x4 over 20m)  kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     17m (x5 over 20m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    3s (x85 over 20m)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3s (x85 over 20m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-955523/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:24:42 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wsxg9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-wsxg9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  14m                  default-scheduler  Successfully assigned default/sp-pod to functional-955523
	  Warning  Failed     12m                  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    11m (x5 over 14m)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     11m (x4 over 14m)    kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     11m (x5 over 14m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m1s (x42 over 14m)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4m1s (x42 over 14m)  kubelet            Error: ImagePullBackOff

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-zxtmh" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-htgrk" not found

** /stderr **
helpers_test.go:287: kubectl --context functional-955523 describe pod busybox-mount hello-node-75c85bcc94-sgbjm hello-node-connect-7d85dfc575-486zs nginx-svc sp-pod dashboard-metrics-scraper-77bf4d6c4c-zxtmh kubernetes-dashboard-855c9754f9-htgrk: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.74s)
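Every failure above has the same root cause: the kubelet's anonymous pulls from registry-1.docker.io are rejected with 429 toomanyrequests, so the pods never leave ImagePullBackOff. A minimal mitigation sketch, assuming host Docker access and using the profile name from this run (the "image load" subcommand also appears in the Audit table further down); this is not part of the original test flow:

	# Pull once on the host (where quota or credentials allow), then side-load
	# the image so the kubelet never needs to contact registry-1.docker.io:
	docker pull kicbase/echo-server:latest
	out/minikube-linux-arm64 -p functional-955523 image load kicbase/echo-server:latest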

TestFunctional/parallel/PersistentVolumeClaim (249.29s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [252306de-14d4-42cd-92fd-546202cc84dd] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004208506s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-955523 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-955523 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-955523 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-955523 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d355d915-4156-4d0d-b780-f3f53fb401a3] Pending
helpers_test.go:352: "sp-pod" [d355d915-4156-4d0d-b780-f3f53fb401a3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-955523 -n functional-955523
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-10-18 12:28:42.658805547 +0000 UTC m=+1681.916443656
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-955523 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-955523 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-955523/192.168.49.2
Start Time:       Sat, 18 Oct 2025 12:24:42 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:  10.244.0.6
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wsxg9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-wsxg9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/sp-pod to functional-955523
  Warning  Failed     2m35s                kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    70s (x5 over 4m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     69s (x4 over 3m59s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     69s (x5 over 3m59s)  kubelet            Error: ErrImagePull
  Normal   BackOff    5s (x15 over 3m59s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     5s (x15 over 3m59s)  kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-955523 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-955523 logs sp-pod -n default: exit status 1 (152.418284ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-955523 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 4m0s: context deadline exceeded
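sp-pod fails for the same reason: docker.io/nginx cannot be pulled anonymously once the runner's rate-limit window is exhausted. An alternative to side-loading images is to authenticate the pulls; a sketch assuming Docker Hub credentials exist (the secret name "regcred" and the <user>/<access-token> placeholders are illustrative, not taken from this run):

	# Create a registry credential and attach it to the default service account
	# so new pods in the default namespace pull with the authenticated limit:
	kubectl --context functional-955523 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context functional-955523 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'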
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-955523
helpers_test.go:243: (dbg) docker inspect functional-955523:

-- stdout --
	[
	    {
	        "Id": "e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c",
	        "Created": "2025-10-18T12:16:14.008246334Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2107283,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T12:16:14.069510242Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c/hostname",
	        "HostsPath": "/var/lib/docker/containers/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c/hosts",
	        "LogPath": "/var/lib/docker/containers/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c/e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c-json.log",
	        "Name": "/functional-955523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-955523:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-955523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e31280ad3c62546f519de43b39b41efc71cb382d2e2221ea573a27b602b3a84c",
	                "LowerDir": "/var/lib/docker/overlay2/6e9eaae7536306379e0cae21c86cbbf1542a5c605653dc40dc5bea4a058bdfb8-init/diff:/var/lib/docker/overlay2/647b2423f8222638985dff90791465004ec84c7fd61ca3830bba92bce09f80ef/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e9eaae7536306379e0cae21c86cbbf1542a5c605653dc40dc5bea4a058bdfb8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e9eaae7536306379e0cae21c86cbbf1542a5c605653dc40dc5bea4a058bdfb8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e9eaae7536306379e0cae21c86cbbf1542a5c605653dc40dc5bea4a058bdfb8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-955523",
	                "Source": "/var/lib/docker/volumes/functional-955523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-955523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-955523",
	                "name.minikube.sigs.k8s.io": "functional-955523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "adfe4b4bccae20f103c88e75ee04efa9395565011b987ceb79a51e3a57d55dca",
	            "SandboxKey": "/var/run/docker/netns/adfe4b4bccae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35709"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35710"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35713"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35711"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35712"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-955523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:d4:c2:3f:ec:f9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "42cfb9c176848d2ffeccdf17874138cf42d5bcd8128808bcdc9dac0a8534a110",
	                    "EndpointID": "c0b036060bcdeb0e9c3fb4f11cc997807f67c06fafda01adb14cd2a83f1d025d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-955523",
	                        "e31280ad3c62"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
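The helpers read individual fields out of this JSON with Go templates rather than re-parsing the whole document; for example, the SSH port lookup used during provisioning (the exact invocation appears in the start log further down) resolves to 35709 for this container:

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523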
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-955523 -n functional-955523
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-955523 logs -n 25: (1.750052501s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-955523 ssh sudo cat /etc/ssl/certs/20769612.pem                                                                                                      │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ functional-955523 image ls                                                                                                                                      │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ ssh     │ functional-955523 ssh sudo cat /usr/share/ca-certificates/20769612.pem                                                                                          │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ functional-955523 image load --daemon kicbase/echo-server:functional-955523 --alsologtostderr                                                                   │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ ssh     │ functional-955523 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ ssh     │ functional-955523 ssh sudo cat /etc/test/nested/copy/2076961/hosts                                                                                              │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ functional-955523 image ls                                                                                                                                      │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ functional-955523 image load --daemon kicbase/echo-server:functional-955523 --alsologtostderr                                                                   │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ functional-955523 image ls                                                                                                                                      │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ functional-955523 image save kicbase/echo-server:functional-955523 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ functional-955523 image rm kicbase/echo-server:functional-955523 --alsologtostderr                                                                              │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ functional-955523 image ls                                                                                                                                      │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ functional-955523 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ functional-955523 image ls                                                                                                                                      │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ image   │ functional-955523 image save --daemon kicbase/echo-server:functional-955523 --alsologtostderr                                                                   │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ ssh     │ functional-955523 ssh echo hello                                                                                                                                │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ ssh     │ functional-955523 ssh cat /etc/hostname                                                                                                                         │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │ 18 Oct 25 12:18 UTC │
	│ tunnel  │ functional-955523 tunnel --alsologtostderr                                                                                                                      │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ tunnel  │ functional-955523 tunnel --alsologtostderr                                                                                                                      │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ tunnel  │ functional-955523 tunnel --alsologtostderr                                                                                                                      │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:18 UTC │                     │
	│ service │ functional-955523 service list                                                                                                                                  │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ service │ functional-955523 service list -o json                                                                                                                          │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │ 18 Oct 25 12:28 UTC │
	│ service │ functional-955523 service --namespace=default --https --url hello-node                                                                                          │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ service │ functional-955523 service hello-node --url --format={{.IP}}                                                                                                     │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	│ service │ functional-955523 service hello-node --url                                                                                                                      │ functional-955523 │ jenkins │ v1.37.0 │ 18 Oct 25 12:28 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:17:44
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:17:44.223263 2111595 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:17:44.223360 2111595 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:17:44.223364 2111595 out.go:374] Setting ErrFile to fd 2...
	I1018 12:17:44.223367 2111595 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:17:44.223640 2111595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
	I1018 12:17:44.224039 2111595 out.go:368] Setting JSON to false
	I1018 12:17:44.224918 2111595 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":50412,"bootTime":1760739453,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 12:17:44.224969 2111595 start.go:141] virtualization:  
	I1018 12:17:44.228426 2111595 out.go:179] * [functional-955523] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:17:44.231570 2111595 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:17:44.231637 2111595 notify.go:220] Checking for updates...
	I1018 12:17:44.235444 2111595 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:17:44.238353 2111595 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	I1018 12:17:44.241220 2111595 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	I1018 12:17:44.244089 2111595 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:17:44.246907 2111595 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:17:44.250089 2111595 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:17:44.250174 2111595 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:17:44.273617 2111595 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:17:44.273748 2111595 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:17:44.350100 2111595 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-18 12:17:44.340514305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:17:44.350196 2111595 docker.go:318] overlay module found
	I1018 12:17:44.353696 2111595 out.go:179] * Using the docker driver based on existing profile
	I1018 12:17:44.356562 2111595 start.go:305] selected driver: docker
	I1018 12:17:44.356571 2111595 start.go:925] validating driver "docker" against &{Name:functional-955523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-955523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:17:44.356667 2111595 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:17:44.356764 2111595 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:17:44.408319 2111595 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-18 12:17:44.399417907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:17:44.408827 2111595 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:17:44.408847 2111595 cni.go:84] Creating CNI manager for ""
	I1018 12:17:44.408900 2111595 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1018 12:17:44.408950 2111595 start.go:349] cluster config:
	{Name:functional-955523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-955523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:17:44.412256 2111595 out.go:179] * Starting "functional-955523" primary control-plane node in "functional-955523" cluster
	I1018 12:17:44.415035 2111595 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1018 12:17:44.417769 2111595 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:17:44.420512 2111595 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1018 12:17:44.420579 2111595 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1018 12:17:44.420587 2111595 cache.go:58] Caching tarball of preloaded images
	I1018 12:17:44.420588 2111595 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:17:44.420673 2111595 preload.go:233] Found /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1018 12:17:44.420681 2111595 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1018 12:17:44.420786 2111595 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/config.json ...
	I1018 12:17:44.439299 2111595 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 12:17:44.439310 2111595 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 12:17:44.439329 2111595 cache.go:232] Successfully downloaded all kic artifacts
	I1018 12:17:44.439359 2111595 start.go:360] acquireMachinesLock for functional-955523: {Name:mk174645edeca81e3c91bee769da3a9dd5d80091 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 12:17:44.439422 2111595 start.go:364] duration metric: took 46.826µs to acquireMachinesLock for "functional-955523"
	I1018 12:17:44.439441 2111595 start.go:96] Skipping create...Using existing machine configuration
	I1018 12:17:44.439446 2111595 fix.go:54] fixHost starting: 
	I1018 12:17:44.439702 2111595 cli_runner.go:164] Run: docker container inspect functional-955523 --format={{.State.Status}}
	I1018 12:17:44.457151 2111595 fix.go:112] recreateIfNeeded on functional-955523: state=Running err=<nil>
	W1018 12:17:44.457170 2111595 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 12:17:44.460539 2111595 out.go:252] * Updating the running docker "functional-955523" container ...
	I1018 12:17:44.460564 2111595 machine.go:93] provisionDockerMachine start ...
	I1018 12:17:44.460652 2111595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
	I1018 12:17:44.482100 2111595 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:44.482422 2111595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 35709 <nil> <nil>}
	I1018 12:17:44.482428 2111595 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 12:17:44.631351 2111595 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-955523
	
	I1018 12:17:44.631363 2111595 ubuntu.go:182] provisioning hostname "functional-955523"
	I1018 12:17:44.631429 2111595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
	I1018 12:17:44.648898 2111595 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:44.649211 2111595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 35709 <nil> <nil>}
	I1018 12:17:44.649219 2111595 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-955523 && echo "functional-955523" | sudo tee /etc/hostname
	I1018 12:17:44.806014 2111595 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-955523
	
	I1018 12:17:44.806084 2111595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
	I1018 12:17:44.823764 2111595 main.go:141] libmachine: Using SSH client type: native
	I1018 12:17:44.824233 2111595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eeee0] 0x3f16a0 <nil>  [] 0s} 127.0.0.1 35709 <nil> <nil>}
	I1018 12:17:44.824249 2111595 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-955523' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-955523/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-955523' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 12:17:44.972018 2111595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 12:17:44.972032 2111595 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21647-2075029/.minikube CaCertPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21647-2075029/.minikube}
	I1018 12:17:44.972048 2111595 ubuntu.go:190] setting up certificates
	I1018 12:17:44.972058 2111595 provision.go:84] configureAuth start
	I1018 12:17:44.972115 2111595 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-955523
	I1018 12:17:44.988657 2111595 provision.go:143] copyHostCerts
	I1018 12:17:44.988714 2111595 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-2075029/.minikube/key.pem, removing ...
	I1018 12:17:44.988732 2111595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-2075029/.minikube/key.pem
	I1018 12:17:44.988794 2111595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21647-2075029/.minikube/key.pem (1675 bytes)
	I1018 12:17:44.988895 2111595 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.pem, removing ...
	I1018 12:17:44.988899 2111595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.pem
	I1018 12:17:44.988919 2111595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.pem (1078 bytes)
	I1018 12:17:44.988981 2111595 exec_runner.go:144] found /home/jenkins/minikube-integration/21647-2075029/.minikube/cert.pem, removing ...
	I1018 12:17:44.988985 2111595 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21647-2075029/.minikube/cert.pem
	I1018 12:17:44.989003 2111595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21647-2075029/.minikube/cert.pem (1123 bytes)
	I1018 12:17:44.989056 2111595 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca-key.pem org=jenkins.functional-955523 san=[127.0.0.1 192.168.49.2 functional-955523 localhost minikube]
	I1018 12:17:45.063273 2111595 provision.go:177] copyRemoteCerts
	I1018 12:17:45.063351 2111595 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 12:17:45.063404 2111595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
	I1018 12:17:45.102904 2111595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35709 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/functional-955523/id_rsa Username:docker}
	I1018 12:17:45.239174 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 12:17:45.264009 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 12:17:45.299971 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 12:17:45.342857 2111595 provision.go:87] duration metric: took 370.770547ms to configureAuth
	I1018 12:17:45.342877 2111595 ubuntu.go:206] setting minikube options for container-runtime
	I1018 12:17:45.343185 2111595 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:17:45.343212 2111595 machine.go:96] duration metric: took 882.632289ms to provisionDockerMachine
	I1018 12:17:45.343224 2111595 start.go:293] postStartSetup for "functional-955523" (driver="docker")
	I1018 12:17:45.343234 2111595 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 12:17:45.343300 2111595 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 12:17:45.343427 2111595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
	I1018 12:17:45.393186 2111595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35709 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/functional-955523/id_rsa Username:docker}
	I1018 12:17:45.510409 2111595 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 12:17:45.514522 2111595 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 12:17:45.514542 2111595 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 12:17:45.514553 2111595 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-2075029/.minikube/addons for local assets ...
	I1018 12:17:45.514616 2111595 filesync.go:126] Scanning /home/jenkins/minikube-integration/21647-2075029/.minikube/files for local assets ...
	I1018 12:17:45.514699 2111595 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-2075029/.minikube/files/etc/ssl/certs/20769612.pem -> 20769612.pem in /etc/ssl/certs
	I1018 12:17:45.514776 2111595 filesync.go:149] local asset: /home/jenkins/minikube-integration/21647-2075029/.minikube/files/etc/test/nested/copy/2076961/hosts -> hosts in /etc/test/nested/copy/2076961
	I1018 12:17:45.514819 2111595 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/2076961
	I1018 12:17:45.522818 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/files/etc/ssl/certs/20769612.pem --> /etc/ssl/certs/20769612.pem (1708 bytes)
	I1018 12:17:45.542172 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/files/etc/test/nested/copy/2076961/hosts --> /etc/test/nested/copy/2076961/hosts (40 bytes)
	I1018 12:17:45.563365 2111595 start.go:296] duration metric: took 220.12344ms for postStartSetup
	I1018 12:17:45.563464 2111595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:17:45.563581 2111595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
	I1018 12:17:45.583714 2111595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35709 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/functional-955523/id_rsa Username:docker}
	I1018 12:17:45.685412 2111595 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 12:17:45.690769 2111595 fix.go:56] duration metric: took 1.251315646s for fixHost
	I1018 12:17:45.690794 2111595 start.go:83] releasing machines lock for "functional-955523", held for 1.251354308s
	I1018 12:17:45.690864 2111595 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-955523
	I1018 12:17:45.708316 2111595 ssh_runner.go:195] Run: cat /version.json
	I1018 12:17:45.708359 2111595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
	I1018 12:17:45.708378 2111595 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 12:17:45.708425 2111595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
	I1018 12:17:45.736145 2111595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35709 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/functional-955523/id_rsa Username:docker}
	I1018 12:17:45.749447 2111595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35709 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/functional-955523/id_rsa Username:docker}
	I1018 12:17:45.937516 2111595 ssh_runner.go:195] Run: systemctl --version
	I1018 12:17:45.944180 2111595 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 12:17:45.948573 2111595 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 12:17:45.948643 2111595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 12:17:45.956461 2111595 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
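
	The disable step above works by renaming matching configs with a .mk_disabled suffix, so it is reversible; a sketch for inspecting and undoing it (assuming the same suffix convention):

	    # List any CNI configs minikube set aside, then restore them.
	    ls /etc/cni/net.d/*.mk_disabled 2>/dev/null
	    for f in /etc/cni/net.d/*.mk_disabled; do
	      [ -e "$f" ] && sudo mv "$f" "${f%.mk_disabled}"
	    done
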
	I1018 12:17:45.956474 2111595 start.go:495] detecting cgroup driver to use...
	I1018 12:17:45.956503 2111595 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 12:17:45.956550 2111595 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1018 12:17:45.971976 2111595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1018 12:17:45.985276 2111595 docker.go:218] disabling cri-docker service (if available) ...
	I1018 12:17:45.985327 2111595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 12:17:46.000658 2111595 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 12:17:46.015779 2111595 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 12:17:46.157727 2111595 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 12:17:46.304934 2111595 docker.go:234] disabling docker service ...
	I1018 12:17:46.304989 2111595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 12:17:46.322283 2111595 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 12:17:46.336722 2111595 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 12:17:46.479219 2111595 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 12:17:46.624768 2111595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 12:17:46.637381 2111595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 12:17:46.652915 2111595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1018 12:17:46.661463 2111595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1018 12:17:46.669881 2111595 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1018 12:17:46.669936 2111595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1018 12:17:46.678422 2111595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:17:46.687259 2111595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1018 12:17:46.695728 2111595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1018 12:17:46.704100 2111595 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 12:17:46.712096 2111595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1018 12:17:46.721323 2111595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1018 12:17:46.729846 2111595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
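
	Net effect of the sed edits above on /etc/containerd/config.toml: the cgroupfs driver instead of systemd cgroups, the pinned pause image, OOM-score restriction off, the runc v2 shim, and unprivileged ports enabled. A quick verification sketch (expected values shown as comments; exact plugin table names vary by containerd version):

	    grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|enable_unprivileged_ports|conf_dir' \
	      /etc/containerd/config.toml
	    # SystemdCgroup = false
	    # sandbox_image = "registry.k8s.io/pause:3.10.1"
	    # restrict_oom_score_adj = false
	    # enable_unprivileged_ports = true
	    # conf_dir = "/etc/cni/net.d"
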
	I1018 12:17:46.738203 2111595 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 12:17:46.745665 2111595 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 12:17:46.752566 2111595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:46.910432 2111595 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1018 12:17:47.233446 2111595 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1018 12:17:47.233512 2111595 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1018 12:17:47.237824 2111595 start.go:563] Will wait 60s for crictl version
	I1018 12:17:47.237885 2111595 ssh_runner.go:195] Run: which crictl
	I1018 12:17:47.241835 2111595 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 12:17:47.266673 2111595 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.28
	RuntimeApiVersion:  v1
	I1018 12:17:47.266755 2111595 ssh_runner.go:195] Run: containerd --version
	I1018 12:17:47.290895 2111595 ssh_runner.go:195] Run: containerd --version
	I1018 12:17:47.319733 2111595 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 1.7.28 ...
	I1018 12:17:47.322631 2111595 cli_runner.go:164] Run: docker network inspect functional-955523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 12:17:47.338584 2111595 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 12:17:47.345410 2111595 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1018 12:17:47.348262 2111595 kubeadm.go:883] updating cluster {Name:functional-955523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-955523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 12:17:47.348376 2111595 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1018 12:17:47.348447 2111595 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:17:47.373834 2111595 containerd.go:627] all images are preloaded for containerd runtime.
	I1018 12:17:47.373846 2111595 containerd.go:534] Images already preloaded, skipping extraction
	I1018 12:17:47.373904 2111595 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 12:17:47.397510 2111595 containerd.go:627] all images are preloaded for containerd runtime.
	I1018 12:17:47.397522 2111595 cache_images.go:85] Images are preloaded, skipping loading
	I1018 12:17:47.397528 2111595 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 containerd true true} ...
	I1018 12:17:47.397626 2111595 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-955523 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-955523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
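
	The unit text above is written to /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in (see the scp lines below); the merged result can be checked with standard systemd tooling:

	    # Show the kubelet unit together with its drop-ins.
	    systemctl cat kubelet
	    # Confirm the effective ExecStart after daemon-reload.
	    systemctl show kubelet -p ExecStart --no-pager
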
	I1018 12:17:47.397697 2111595 ssh_runner.go:195] Run: sudo crictl info
	I1018 12:17:47.422291 2111595 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1018 12:17:47.422308 2111595 cni.go:84] Creating CNI manager for ""
	I1018 12:17:47.422320 2111595 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1018 12:17:47.422334 2111595 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 12:17:47.422358 2111595 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-955523 NodeName:functional-955523 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 12:17:47.422493 2111595 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-955523"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
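
	Before this rendered config is applied phase by phase (see the kubeadm init phase calls below), it can be exercised end to end without persisting anything; a sketch using kubeadm's dry-run mode against the file written in this run:

	    # Validate the generated config without touching the node (sketch).
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
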
	
	I1018 12:17:47.422568 2111595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 12:17:47.430373 2111595 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 12:17:47.430434 2111595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 12:17:47.437743 2111595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1018 12:17:47.449653 2111595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 12:17:47.461514 2111595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2080 bytes)
	I1018 12:17:47.475310 2111595 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 12:17:47.479253 2111595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:17:47.610547 2111595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:17:47.623604 2111595 certs.go:69] Setting up /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523 for IP: 192.168.49.2
	I1018 12:17:47.623614 2111595 certs.go:195] generating shared ca certs ...
	I1018 12:17:47.623627 2111595 certs.go:227] acquiring lock for ca certs: {Name:mkb3a5ce8c0a7d3b9a246d80f0747d48f33f9661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:17:47.623759 2111595 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.key
	I1018 12:17:47.623803 2111595 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.key
	I1018 12:17:47.623809 2111595 certs.go:257] generating profile certs ...
	I1018 12:17:47.623924 2111595 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.key
	I1018 12:17:47.623968 2111595 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/apiserver.key.6dfe3d8c
	I1018 12:17:47.624008 2111595 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/proxy-client.key
	I1018 12:17:47.624107 2111595 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/2076961.pem (1338 bytes)
	W1018 12:17:47.624131 2111595 certs.go:480] ignoring /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/2076961_empty.pem, impossibly tiny 0 bytes
	I1018 12:17:47.624138 2111595 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca-key.pem (1679 bytes)
	I1018 12:17:47.624159 2111595 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/ca.pem (1078 bytes)
	I1018 12:17:47.624184 2111595 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/cert.pem (1123 bytes)
	I1018 12:17:47.624202 2111595 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/key.pem (1675 bytes)
	I1018 12:17:47.624244 2111595 certs.go:484] found cert: /home/jenkins/minikube-integration/21647-2075029/.minikube/files/etc/ssl/certs/20769612.pem (1708 bytes)
	I1018 12:17:47.624816 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 12:17:47.647251 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 12:17:47.666209 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 12:17:47.687156 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 12:17:47.706006 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 12:17:47.723418 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 12:17:47.743153 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 12:17:47.759859 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 12:17:47.783324 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/certs/2076961.pem --> /usr/share/ca-certificates/2076961.pem (1338 bytes)
	I1018 12:17:47.801512 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/files/etc/ssl/certs/20769612.pem --> /usr/share/ca-certificates/20769612.pem (1708 bytes)
	I1018 12:17:47.818878 2111595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 12:17:47.837695 2111595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 12:17:47.850578 2111595 ssh_runner.go:195] Run: openssl version
	I1018 12:17:47.857447 2111595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20769612.pem && ln -fs /usr/share/ca-certificates/20769612.pem /etc/ssl/certs/20769612.pem"
	I1018 12:17:47.866486 2111595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20769612.pem
	I1018 12:17:47.870103 2111595 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 12:16 /usr/share/ca-certificates/20769612.pem
	I1018 12:17:47.870169 2111595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20769612.pem
	I1018 12:17:47.911870 2111595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20769612.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 12:17:47.919817 2111595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 12:17:47.928067 2111595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:47.932168 2111595 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 12:01 /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:47.932250 2111595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 12:17:47.975429 2111595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 12:17:47.983112 2111595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2076961.pem && ln -fs /usr/share/ca-certificates/2076961.pem /etc/ssl/certs/2076961.pem"
	I1018 12:17:47.991223 2111595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2076961.pem
	I1018 12:17:47.994895 2111595 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 12:16 /usr/share/ca-certificates/2076961.pem
	I1018 12:17:47.994950 2111595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2076961.pem
	I1018 12:17:48.036870 2111595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2076961.pem /etc/ssl/certs/51391683.0"
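
	The hash-named symlinks above follow OpenSSL's c_rehash convention: a CA in /etc/ssl/certs must be reachable as <subject-hash>.0 for certificate lookup to find it, which is why each link step is preceded by an openssl x509 -hash call. One link reproduced by hand (values match this run: minikubeCA hashes to b5213941):

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
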
	I1018 12:17:48.045745 2111595 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 12:17:48.049879 2111595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 12:17:48.091962 2111595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 12:17:48.133824 2111595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 12:17:48.179830 2111595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 12:17:48.220823 2111595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 12:17:48.261961 2111595 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
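
	Each -checkend 86400 call above is a pure exit-status test: 0 means the certificate is still valid 24 hours from now, nonzero means it is expiring and would trigger regeneration. A sketch of gating on that status:

	    if openssl x509 -noout -checkend 86400 \
	        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	      echo "valid for at least another 24h"
	    else
	      echo "expires within 24h - needs regeneration"
	    fi
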
	I1018 12:17:48.302819 2111595 kubeadm.go:400] StartCluster: {Name:functional-955523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-955523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:17:48.302908 2111595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1018 12:17:48.302964 2111595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:17:48.332894 2111595 cri.go:89] found id: "176afc34450efcb861de660c1d660228a48104355b0efc6167b91e80ad5bd489"
	I1018 12:17:48.332906 2111595 cri.go:89] found id: "7510f517c2f3645d561d094dc25d6c2527bab676e30b04817ee8fb8fda8f73fa"
	I1018 12:17:48.332909 2111595 cri.go:89] found id: "65c34c830786f317cc2cdced96beedd1d5f9539253820d652ecb98011884fac9"
	I1018 12:17:48.332912 2111595 cri.go:89] found id: "d6f640024b52e52344f6d4beea3692e49f480ff306b37e7e7949cd611eb5733a"
	I1018 12:17:48.332915 2111595 cri.go:89] found id: "ffdc1092e749b4668b8ea43725493950b34be136dea12e415cbf3422cb2db3b0"
	I1018 12:17:48.332918 2111595 cri.go:89] found id: "da2901032099d934196c576abc889fbb168121b36a8a8790cb4dd857b472f99b"
	I1018 12:17:48.332920 2111595 cri.go:89] found id: "99214a24e5beb9597fcc144a8f6dee7c2f43da8a813bf10e560bd6579a7f99be"
	I1018 12:17:48.332923 2111595 cri.go:89] found id: "091437cb53c8208a269d6f9eb67c48d017cbd39de23f6d0c769653bdcf8b8576"
	I1018 12:17:48.332925 2111595 cri.go:89] found id: ""
	I1018 12:17:48.332975 2111595 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1018 12:17:48.360999 2111595 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"091437cb53c8208a269d6f9eb67c48d017cbd39de23f6d0c769653bdcf8b8576","pid":1348,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/091437cb53c8208a269d6f9eb67c48d017cbd39de23f6d0c769653bdcf8b8576","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/091437cb53c8208a269d6f9eb67c48d017cbd39de23f6d0c769653bdcf8b8576/rootfs","created":"2025-10-18T12:16:33.07455445Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"831fb163c52b3193711a6ac66322a75510da2ef2da59ec9a5c3d1d9d345a00a3","io.kubernetes.cri.sandbox-name":"etcd-functional-955523","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4aab879a301ba5656152f895af057c15"},"owner":"root"},{"ociVersion":"1.2.1","id":"0a85e917878570bd2d5b1b4bb21344473889e02e953a4029b315cc42cc8798ea","pid":1292,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a85e917878570bd2d5b1b4bb21344473889e02e953a4029b315cc42cc8798ea","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a85e917878570bd2d5b1b4bb21344473889e02e953a4029b315cc42cc8798ea/rootfs","created":"2025-10-18T12:16:32.992163194Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"0a85e917878570bd2d5b1b4bb21344473889e02e953a4029b315cc42cc8798ea","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-955523_a68158d63e23e7c33b28186c4b6ba260","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-955523","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a68158d63e23e7c33b28186c4b6ba260"},"owner":"root"},{"ociVersion":"1.2.1","id":"176afc34450efcb861de660c1d660228a48104355b0efc6167b91e80ad5bd489","pid":2189,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/176afc34450efcb861de660c1d660228a48104355b0efc6167b91e80ad5bd489","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/176afc34450efcb861de660c1d660228a48104355b0efc6167b91e80ad5bd489/rootfs","created":"2025-10-18T12:17:26.88961835Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.12.1","io.kubernetes.cri.sandbox-id":"838d864c0cd2045367e59764bcc4315b92c3c97126c202a3ddf51aecb99e6ccd","io.kubernetes.cri.sandbox-name":"coredns-66bc5c9577-jfd97","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"293741f4-e73e-4dfc-be4f-6f8965d37fef"},"owner":"root"},{"ociVersion":"1.2.1","id":"39f7db02a4760614685384efa2fb225ae7df235dee2c87232972b884334214ca","pid":1722,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/39f7db02a4760614685384efa2fb225ae7df235dee2c87232972b884334214ca","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/39f7db02a4760614685384efa2fb225ae7df235dee2c87232972b884334214ca/rootfs","created":"2025-10-18T12:16:45.508881616Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"39f7db02a4760614685384efa2fb225ae7df235dee2c87232972b884334214ca","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-g62kl_a29be4b4-2781-46a6-aa5b-f36f743f5429","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-g62kl","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a29be4b4-2781-46a6-aa5b-f36f743f5429"},"owner":"root"},{"ociVersion":"1.2.1","id":"56356373ae1f93505ccc4b0f458aa4218e5940c2400795bcf284fdd481382040","pid":1279,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/56356373ae1f93505ccc4b0f458aa4218e5940c2400795bcf284fdd481382040","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/56356373ae1f93505ccc4b0f458aa4218e5940c2400795bcf284fdd481382040/rootfs","created":"2025-10-18T12:16:32.972081808Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"56356373ae1f93505ccc4b0f458aa4218e5940c2400795bcf284fdd481382040","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-955523_b13de00f364581310f0dfb4498c1e57d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-955523","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b13de00f364581310f0dfb4498c1e57d"},"owner":"root"},{"ociVersion":"1.2.1","id":"5a40d8c62b89c07886e16e6176efeb265263fd4ef2e1157fe4d59b1a30f34e80","pid":1299,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a40d8c62b89c07886e16e6176efeb265263fd4ef2e1157fe4d59b1a30f34e80","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a40d8c62b89c07886e16e6176efeb265263fd4ef2e1157fe4d59b1a30f34e80/rootfs","created":"2025-10-18T12:16:32.986880988Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"5a40d8c62b89c07886e16e6176efeb265263fd4ef2e1157fe4d59b1a30f34e80","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-955523_d849b12f2ad950b4c2f6907605cf86e2","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-955523","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d849b12f2ad950b4c2f6907605cf86e2"},"owner":"root"},{"ociVersion":"1.2.1","id":"65c34c830786f317cc2cdced96beedd1d5f9539253820d652ecb98011884fac9","pid":1832,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/65c34c830786f317cc2cdced96beedd1d5f9539253820d652ecb98011884fac9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/65c34c830786f317cc2cdced96beedd1d5f9539253820d652ecb98011884fac9/rootfs","created":"2025-10-18T12:16:45.884775967Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.34.1","io.kubernetes.cri.sandbox-id":"76b1c9184774993cc44f9091c6a0bcb84343ebc7e357142ed8358a84685c0b1b","io.kubernetes.cri.sandbox-name":"kube-proxy-wp97m","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"dcb400bb-8e4b-421b-b013-671fef2cf3b8"},"owner":"root"},{"ociVersion":"1.2.1","id":"7510f517c2f3645d561d094dc25d6c2527bab676e30b04817ee8fb8fda8f73fa","pid":2168,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7510f517c2f3645d561d094dc25d6c2527bab676e30b04817ee8fb8fda8f73fa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7510f517c2f3645d561d094dc25d6c2527bab676e30b04817ee8fb8fda8f73fa/rootfs","created":"2025-10-18T12:17:26.878741692Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"8fd44ed1df230f81f600c00a5875a2fdcc0156326ab03aded5d7f3c665f3cd80","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"252306de-14d4-42cd-92fd-546202cc84dd"},"owner":"root"},{"ociVersion":"1.2.1","id":"76b1c9184774993cc44f9091c6a0bcb84343ebc7e357142ed8358a84685c0b1b","pid":1768,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76b1c9184774993cc44f9091c6a0bcb84343ebc7e357142ed8358a84685c0b1b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76b1c9184774993cc44f9091c6a0bcb84343ebc7e357142ed8358a84685c0b1b/rootfs","created":"2025-10-18T12:16:45.703438281Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"76b1c9184774993cc44f9091c6a0bcb84343ebc7e357142ed8358a84685c0b1b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-wp97m_dcb400bb-8e4b-421b-b013-671fef2cf3b8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-wp97m","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"dcb400bb-8e4b-421b-b013-671fef2cf3b8"},"owner":"root"},{"ociVersion":"1.2.1","id":"831fb163c52b3193711a6ac66322a75510da2ef2da59ec9a5c3d1d9d345a00a3","pid":1189,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/831fb163c52b3193711a6ac66322a75510da2ef2da59ec9a5c3d1d9d345a00a3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/831fb163c52b3193711a6ac66322a75510da2ef2da59ec9a5c3d1d9d345a00a3/rootfs","created":"2025-10-18T12:16:32.909513862Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"831fb163c52b3193711a6ac66322a75510da2ef2da59ec9a5c3d1d9d345a00a3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-955523_4aab879a301ba5656152f895af057c15","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-955523","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4aab879a301ba5656152f895af057c15"},"owner":"root"},{"ociVersion":"1.2.1","id":"838d864c0cd2045367e59764bcc4315b92c3c97126c202a3ddf51aecb99e6ccd","pid":2120,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/838d864c0cd2045367e59764bcc4315b92c3c97126c202a3ddf51aecb99e6ccd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/838d864c0cd2045367e59764bcc4315b92c3c97126c202a3ddf51aecb99e6ccd/rootfs","created":"2025-10-18T12:17:26.748567323Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"838d864c0cd2045367e59764bcc4315b92c3c97126c202a3ddf51aecb99e6ccd","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-66bc5c9577-jfd97_293741f4-e73e-4dfc-be4f-6f8965d37fef","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-66bc5c9577-jfd97","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"293741f4-e73e-4dfc-be4f-6f8965d37fef"},"owner":"root"},{"ociVersion":"1.2.1","id":"8fd44ed1df230f81f600c00a5875a2fdcc0156326ab03aded5d7f3c665f3cd80","pid":2086,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8fd44ed1df230f81f600c00a5875a2fdcc0156326ab03aded5d7f3c665f3cd80","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8fd44ed1df230f81f600c00a5875a2fdcc0156326ab03aded5d7f3c665f3cd80/rootfs","created":"2025-10-18T12:17:26.731621018Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"8fd44ed1df230f81f600c00a5875a2fdcc0156326ab03aded5d7f3c665f3cd80","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_252306de-14d4-42cd-92fd-546202cc84dd","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"252306de-14d4-42cd-92fd-546202cc84dd"},"owner":"root"},{"ociVersion":"1.2.1","id":"99214a24e5beb9597fcc144a8f6dee7c2f43da8a813bf10e560bd6579a7f99be","pid":1390,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/99214a24e5beb9597fcc144a8f6dee7c2f43da8a813bf10e560bd6579a7f99be","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/99214a24e5beb9597fcc144a8f6dee7c2f43da8a813bf10e560bd6579a7f99be/rootfs","created":"2025-10-18T12:16:33.16115658Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.34.1","io.kubernetes.cri.sandbox-id":"56356373ae1f93505ccc4b0f458aa4218e5940c2400795bcf284fdd481382040","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-955523","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b13de00f364581310f0dfb4498c1e57d"},"owner":"root"},{"ociVersion":"1.2.1","id":"d6f640024b52e52344f6d4beea3692e49f480ff306b37e7e7949cd611eb5733a","pid":1799,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6f640024b52e52344f6d4beea3692e49f480ff306b37e7e7949cd611eb5733a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d6f640024b52e52344f6d4beea3692e49f480ff306b37e7e7949cd611eb5733a/rootfs","created":"2025-10-18T12:16:45.811752021Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20250512-df8de77b","io.kubernetes.cri.sandbox-id":"39f7db02a4760614685384efa2fb225ae7df235dee2c87232972b884334214ca","io.kubernetes.cri.sandbox-name":"kindnet-g62kl","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a29be4b4-2781-46a6-aa5b-f36f743f5429"},"owner":"root"},{"ociVersion":"1.2.1","id":"da2901032099d934196c576abc889fbb168121b36a8a8790cb4dd857b472f99b","pid":1454,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da2901032099d934196c576abc889fbb168121b36a8a8790cb4dd857b472f99b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/da2901032099d934196c576abc889fbb168121b36a8a8790cb4dd857b472f99b/rootfs","created":"2025-10-18T12:16:33.242324606Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.34.1","io.kubernetes.cri.sandbox-id":"0a85e917878570bd2d5b1b4bb21344473889e02e953a4029b315cc42cc8798ea","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-955523","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"a68158d63e23e7c33b28186c4b6ba260"},"owner":"root"},{"ociVersion":"1.2.1","id":"ffdc1092e749b4668b8ea43725493950b34be136dea12e415cbf3422cb2db3b0","pid":1433,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ffdc1092e749b4668b8ea43725493950b34be136dea12e415cbf3422cb2db3b0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ffdc1092e749b4668b8ea43725493950b34be136dea12e415cbf3422cb2db3b0/rootfs","created":"2025-10-18T12:16:33.210322034Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.34.1","io.kubernetes.cri.sandbox-id":"5a40d8c62b89c07886e16e6176efeb265263fd4ef2e1157fe4d59b1a30f34e80","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-955523","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"d849b12f2ad950b4c2f6907605cf86e2"},"owner":"root"}]
	I1018 12:17:48.361352 2111595 cri.go:126] list returned 16 containers
	I1018 12:17:48.361360 2111595 cri.go:129] container: {ID:091437cb53c8208a269d6f9eb67c48d017cbd39de23f6d0c769653bdcf8b8576 Status:running}
	I1018 12:17:48.361377 2111595 cri.go:135] skipping {091437cb53c8208a269d6f9eb67c48d017cbd39de23f6d0c769653bdcf8b8576 running}: state = "running", want "paused"
	I1018 12:17:48.361385 2111595 cri.go:129] container: {ID:0a85e917878570bd2d5b1b4bb21344473889e02e953a4029b315cc42cc8798ea Status:running}
	I1018 12:17:48.361390 2111595 cri.go:131] skipping 0a85e917878570bd2d5b1b4bb21344473889e02e953a4029b315cc42cc8798ea - not in ps
	I1018 12:17:48.361394 2111595 cri.go:129] container: {ID:176afc34450efcb861de660c1d660228a48104355b0efc6167b91e80ad5bd489 Status:running}
	I1018 12:17:48.361399 2111595 cri.go:135] skipping {176afc34450efcb861de660c1d660228a48104355b0efc6167b91e80ad5bd489 running}: state = "running", want "paused"
	I1018 12:17:48.361402 2111595 cri.go:129] container: {ID:39f7db02a4760614685384efa2fb225ae7df235dee2c87232972b884334214ca Status:running}
	I1018 12:17:48.361407 2111595 cri.go:131] skipping 39f7db02a4760614685384efa2fb225ae7df235dee2c87232972b884334214ca - not in ps
	I1018 12:17:48.361409 2111595 cri.go:129] container: {ID:56356373ae1f93505ccc4b0f458aa4218e5940c2400795bcf284fdd481382040 Status:running}
	I1018 12:17:48.361414 2111595 cri.go:131] skipping 56356373ae1f93505ccc4b0f458aa4218e5940c2400795bcf284fdd481382040 - not in ps
	I1018 12:17:48.361416 2111595 cri.go:129] container: {ID:5a40d8c62b89c07886e16e6176efeb265263fd4ef2e1157fe4d59b1a30f34e80 Status:running}
	I1018 12:17:48.361421 2111595 cri.go:131] skipping 5a40d8c62b89c07886e16e6176efeb265263fd4ef2e1157fe4d59b1a30f34e80 - not in ps
	I1018 12:17:48.361424 2111595 cri.go:129] container: {ID:65c34c830786f317cc2cdced96beedd1d5f9539253820d652ecb98011884fac9 Status:running}
	I1018 12:17:48.361429 2111595 cri.go:135] skipping {65c34c830786f317cc2cdced96beedd1d5f9539253820d652ecb98011884fac9 running}: state = "running", want "paused"
	I1018 12:17:48.361433 2111595 cri.go:129] container: {ID:7510f517c2f3645d561d094dc25d6c2527bab676e30b04817ee8fb8fda8f73fa Status:running}
	I1018 12:17:48.361438 2111595 cri.go:135] skipping {7510f517c2f3645d561d094dc25d6c2527bab676e30b04817ee8fb8fda8f73fa running}: state = "running", want "paused"
	I1018 12:17:48.361441 2111595 cri.go:129] container: {ID:76b1c9184774993cc44f9091c6a0bcb84343ebc7e357142ed8358a84685c0b1b Status:running}
	I1018 12:17:48.361445 2111595 cri.go:131] skipping 76b1c9184774993cc44f9091c6a0bcb84343ebc7e357142ed8358a84685c0b1b - not in ps
	I1018 12:17:48.361448 2111595 cri.go:129] container: {ID:831fb163c52b3193711a6ac66322a75510da2ef2da59ec9a5c3d1d9d345a00a3 Status:running}
	I1018 12:17:48.361453 2111595 cri.go:131] skipping 831fb163c52b3193711a6ac66322a75510da2ef2da59ec9a5c3d1d9d345a00a3 - not in ps
	I1018 12:17:48.361455 2111595 cri.go:129] container: {ID:838d864c0cd2045367e59764bcc4315b92c3c97126c202a3ddf51aecb99e6ccd Status:running}
	I1018 12:17:48.361459 2111595 cri.go:131] skipping 838d864c0cd2045367e59764bcc4315b92c3c97126c202a3ddf51aecb99e6ccd - not in ps
	I1018 12:17:48.361462 2111595 cri.go:129] container: {ID:8fd44ed1df230f81f600c00a5875a2fdcc0156326ab03aded5d7f3c665f3cd80 Status:running}
	I1018 12:17:48.361465 2111595 cri.go:131] skipping 8fd44ed1df230f81f600c00a5875a2fdcc0156326ab03aded5d7f3c665f3cd80 - not in ps
	I1018 12:17:48.361468 2111595 cri.go:129] container: {ID:99214a24e5beb9597fcc144a8f6dee7c2f43da8a813bf10e560bd6579a7f99be Status:running}
	I1018 12:17:48.361473 2111595 cri.go:135] skipping {99214a24e5beb9597fcc144a8f6dee7c2f43da8a813bf10e560bd6579a7f99be running}: state = "running", want "paused"
	I1018 12:17:48.361477 2111595 cri.go:129] container: {ID:d6f640024b52e52344f6d4beea3692e49f480ff306b37e7e7949cd611eb5733a Status:running}
	I1018 12:17:48.361482 2111595 cri.go:135] skipping {d6f640024b52e52344f6d4beea3692e49f480ff306b37e7e7949cd611eb5733a running}: state = "running", want "paused"
	I1018 12:17:48.361485 2111595 cri.go:129] container: {ID:da2901032099d934196c576abc889fbb168121b36a8a8790cb4dd857b472f99b Status:running}
	I1018 12:17:48.361489 2111595 cri.go:135] skipping {da2901032099d934196c576abc889fbb168121b36a8a8790cb4dd857b472f99b running}: state = "running", want "paused"
	I1018 12:17:48.361492 2111595 cri.go:129] container: {ID:ffdc1092e749b4668b8ea43725493950b34be136dea12e415cbf3422cb2db3b0 Status:running}
	I1018 12:17:48.361500 2111595 cri.go:135] skipping {ffdc1092e749b4668b8ea43725493950b34be136dea12e415cbf3422cb2db3b0 running}: state = "running", want "paused"
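
	The walk above keeps only containers whose runc state matches the requested {State:paused ...} filter, which is why every running entry is skipped and the result set ends up empty. With jq on the node (an assumption; minikube does this filtering in Go), a close equivalent of the status filter is:

	    # kube-system containers currently paused, per runc's state list (sketch).
	    sudo runc --root /run/containerd/runc/k8s.io list -f json \
	      | jq -r '.[]
	          | select(.status == "paused")
	          | select(.annotations["io.kubernetes.cri.sandbox-namespace"] == "kube-system")
	          | .id'
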
	I1018 12:17:48.361559 2111595 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 12:17:48.369125 2111595 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 12:17:48.369133 2111595 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 12:17:48.369183 2111595 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 12:17:48.376246 2111595 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:17:48.376747 2111595 kubeconfig.go:125] found "functional-955523" server: "https://192.168.49.2:8441"
	I1018 12:17:48.377944 2111595 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 12:17:48.385690 2111595 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-18 12:16:23.470181933 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-18 12:17:47.467946056 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
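
	Drift detection here is just diff's exit status: 0 means the stored kubeadm.yaml matches the newly rendered one, 1 (as in this run) means the cluster must be reconfigured. A sketch of the decision:

	    # Reconfigure only when the rendered config differs.
	    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	      echo "no drift - reuse existing cluster configuration"
	    else
	      echo "drift detected - reconfigure from kubeadm.yaml.new"
	    fi
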
	I1018 12:17:48.385699 2111595 kubeadm.go:1160] stopping kube-system containers ...
	I1018 12:17:48.385723 2111595 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1018 12:17:48.385795 2111595 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 12:17:48.416407 2111595 cri.go:89] found id: "176afc34450efcb861de660c1d660228a48104355b0efc6167b91e80ad5bd489"
	I1018 12:17:48.416419 2111595 cri.go:89] found id: "7510f517c2f3645d561d094dc25d6c2527bab676e30b04817ee8fb8fda8f73fa"
	I1018 12:17:48.416422 2111595 cri.go:89] found id: "65c34c830786f317cc2cdced96beedd1d5f9539253820d652ecb98011884fac9"
	I1018 12:17:48.416425 2111595 cri.go:89] found id: "d6f640024b52e52344f6d4beea3692e49f480ff306b37e7e7949cd611eb5733a"
	I1018 12:17:48.416427 2111595 cri.go:89] found id: "ffdc1092e749b4668b8ea43725493950b34be136dea12e415cbf3422cb2db3b0"
	I1018 12:17:48.416430 2111595 cri.go:89] found id: "da2901032099d934196c576abc889fbb168121b36a8a8790cb4dd857b472f99b"
	I1018 12:17:48.416433 2111595 cri.go:89] found id: "99214a24e5beb9597fcc144a8f6dee7c2f43da8a813bf10e560bd6579a7f99be"
	I1018 12:17:48.416435 2111595 cri.go:89] found id: "091437cb53c8208a269d6f9eb67c48d017cbd39de23f6d0c769653bdcf8b8576"
	I1018 12:17:48.416437 2111595 cri.go:89] found id: ""
	I1018 12:17:48.416441 2111595 cri.go:252] Stopping containers: [176afc34450efcb861de660c1d660228a48104355b0efc6167b91e80ad5bd489 7510f517c2f3645d561d094dc25d6c2527bab676e30b04817ee8fb8fda8f73fa 65c34c830786f317cc2cdced96beedd1d5f9539253820d652ecb98011884fac9 d6f640024b52e52344f6d4beea3692e49f480ff306b37e7e7949cd611eb5733a ffdc1092e749b4668b8ea43725493950b34be136dea12e415cbf3422cb2db3b0 da2901032099d934196c576abc889fbb168121b36a8a8790cb4dd857b472f99b 99214a24e5beb9597fcc144a8f6dee7c2f43da8a813bf10e560bd6579a7f99be 091437cb53c8208a269d6f9eb67c48d017cbd39de23f6d0c769653bdcf8b8576]
	I1018 12:17:48.416494 2111595 ssh_runner.go:195] Run: which crictl
	I1018 12:17:48.420070 2111595 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 176afc34450efcb861de660c1d660228a48104355b0efc6167b91e80ad5bd489 7510f517c2f3645d561d094dc25d6c2527bab676e30b04817ee8fb8fda8f73fa 65c34c830786f317cc2cdced96beedd1d5f9539253820d652ecb98011884fac9 d6f640024b52e52344f6d4beea3692e49f480ff306b37e7e7949cd611eb5733a ffdc1092e749b4668b8ea43725493950b34be136dea12e415cbf3422cb2db3b0 da2901032099d934196c576abc889fbb168121b36a8a8790cb4dd857b472f99b 99214a24e5beb9597fcc144a8f6dee7c2f43da8a813bf10e560bd6579a7f99be 091437cb53c8208a269d6f9eb67c48d017cbd39de23f6d0c769653bdcf8b8576
	I1018 12:18:04.118811 2111595 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 176afc34450efcb861de660c1d660228a48104355b0efc6167b91e80ad5bd489 7510f517c2f3645d561d094dc25d6c2527bab676e30b04817ee8fb8fda8f73fa 65c34c830786f317cc2cdced96beedd1d5f9539253820d652ecb98011884fac9 d6f640024b52e52344f6d4beea3692e49f480ff306b37e7e7949cd611eb5733a ffdc1092e749b4668b8ea43725493950b34be136dea12e415cbf3422cb2db3b0 da2901032099d934196c576abc889fbb168121b36a8a8790cb4dd857b472f99b 99214a24e5beb9597fcc144a8f6dee7c2f43da8a813bf10e560bd6579a7f99be 091437cb53c8208a269d6f9eb67c48d017cbd39de23f6d0c769653bdcf8b8576: (15.698709231s)
	I1018 12:18:04.118872 2111595 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1018 12:18:04.220169 2111595 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 12:18:04.227805 2111595 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct 18 12:16 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct 18 12:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 18 12:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct 18 12:16 /etc/kubernetes/scheduler.conf
	
	I1018 12:18:04.227891 2111595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1018 12:18:04.236024 2111595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1018 12:18:04.243310 2111595 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:18:04.243361 2111595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 12:18:04.250430 2111595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1018 12:18:04.257483 2111595 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:18:04.257533 2111595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 12:18:04.264743 2111595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1018 12:18:04.271968 2111595 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 12:18:04.272020 2111595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 12:18:04.279333 2111595 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 12:18:04.287174 2111595 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 12:18:04.331202 2111595 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 12:18:08.271630 2111595 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.94040499s)
	I1018 12:18:08.271696 2111595 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1018 12:18:08.520459 2111595 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 12:18:08.586647 2111595 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1018 12:18:08.665378 2111595 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:18:08.665457 2111595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:18:09.165600 2111595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:18:09.665601 2111595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:18:09.687780 2111595 api_server.go:72] duration metric: took 1.02240797s to wait for apiserver process to appear ...
	I1018 12:18:09.687797 2111595 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:18:09.687827 2111595 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 12:18:09.688368 2111595 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I1018 12:18:10.187896 2111595 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 12:18:12.847741 2111595 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 12:18:12.847755 2111595 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 12:18:12.847768 2111595 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 12:18:12.897511 2111595 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 12:18:12.897528 2111595 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 12:18:13.188866 2111595 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 12:18:13.199231 2111595 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:18:13.199256 2111595 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
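minikube keeps polling until every check passes; the two [-] entries mark poststarthooks that are still pending (bootstrap RBAC roles and the default system PriorityClasses). "reason withheld" means the apiserver is hiding failure details from an unauthorized caller; an authenticated client can request the same listing with reasons included, e.g.:

    kubectl get --raw '/healthz?verbose'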
	I1018 12:18:13.688890 2111595 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 12:18:13.697232 2111595 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 12:18:13.697247 2111595 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 12:18:14.188446 2111595 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 12:18:14.196556 2111595 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1018 12:18:14.210016 2111595 api_server.go:141] control plane version: v1.34.1
	I1018 12:18:14.210035 2111595 api_server.go:131] duration metric: took 4.522233148s to wait for apiserver health ...
	I1018 12:18:14.210043 2111595 cni.go:84] Creating CNI manager for ""
	I1018 12:18:14.210048 2111595 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1018 12:18:14.213328 2111595 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 12:18:14.216242 2111595 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 12:18:14.220364 2111595 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 12:18:14.220372 2111595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 12:18:14.236428 2111595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 12:18:14.661132 2111595 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:18:14.665111 2111595 system_pods.go:59] 8 kube-system pods found
	I1018 12:18:14.665130 2111595 system_pods.go:61] "coredns-66bc5c9577-jfd97" [293741f4-e73e-4dfc-be4f-6f8965d37fef] Running
	I1018 12:18:14.665139 2111595 system_pods.go:61] "etcd-functional-955523" [acf4eb08-0dbb-4f40-b6eb-3cf11aad8bd2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:14.665144 2111595 system_pods.go:61] "kindnet-g62kl" [a29be4b4-2781-46a6-aa5b-f36f743f5429] Running
	I1018 12:18:14.665152 2111595 system_pods.go:61] "kube-apiserver-functional-955523" [188d0ad8-ba0a-4cf4-8802-fc8fff0c2f57] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:14.665158 2111595 system_pods.go:61] "kube-controller-manager-functional-955523" [8c8bde3e-d297-4f21-acdb-55ff7b03dcc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:14.665162 2111595 system_pods.go:61] "kube-proxy-wp97m" [dcb400bb-8e4b-421b-b013-671fef2cf3b8] Running
	I1018 12:18:14.665168 2111595 system_pods.go:61] "kube-scheduler-functional-955523" [b3404f27-e5d3-4cf7-9c9b-13d2627de0ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:14.665173 2111595 system_pods.go:61] "storage-provisioner" [252306de-14d4-42cd-92fd-546202cc84dd] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 12:18:14.665179 2111595 system_pods.go:74] duration metric: took 4.033345ms to wait for pod list to return data ...
	I1018 12:18:14.665185 2111595 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:18:14.667947 2111595 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:18:14.667966 2111595 node_conditions.go:123] node cpu capacity is 2
	I1018 12:18:14.667976 2111595 node_conditions.go:105] duration metric: took 2.78761ms to run NodePressure ...
	I1018 12:18:14.668034 2111595 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 12:18:14.953230 2111595 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1018 12:18:14.957104 2111595 kubeadm.go:743] kubelet initialised
	I1018 12:18:14.957114 2111595 kubeadm.go:744] duration metric: took 3.871971ms waiting for restarted kubelet to initialise ...
	I1018 12:18:14.957128 2111595 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 12:18:14.967946 2111595 ops.go:34] apiserver oom_adj: -16
	I1018 12:18:14.967957 2111595 kubeadm.go:601] duration metric: took 26.598818581s to restartPrimaryControlPlane
	I1018 12:18:14.967965 2111595 kubeadm.go:402] duration metric: took 26.665154091s to StartCluster
	I1018 12:18:14.967993 2111595 settings.go:142] acquiring lock: {Name:mkfe09c4f932c229739f9b782a8232962c7d94cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:14.968051 2111595 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21647-2075029/kubeconfig
	I1018 12:18:14.968714 2111595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/kubeconfig: {Name:mkb34a50149724994ca0c2a0fd8679c156671366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:18:14.968937 2111595 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1018 12:18:14.969179 2111595 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:18:14.969211 2111595 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 12:18:14.969263 2111595 addons.go:69] Setting storage-provisioner=true in profile "functional-955523"
	I1018 12:18:14.969270 2111595 addons.go:69] Setting default-storageclass=true in profile "functional-955523"
	I1018 12:18:14.969277 2111595 addons.go:238] Setting addon storage-provisioner=true in "functional-955523"
	W1018 12:18:14.969282 2111595 addons.go:247] addon storage-provisioner should already be in state true
	I1018 12:18:14.969285 2111595 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-955523"
	I1018 12:18:14.969311 2111595 host.go:66] Checking if "functional-955523" exists ...
	I1018 12:18:14.969584 2111595 cli_runner.go:164] Run: docker container inspect functional-955523 --format={{.State.Status}}
	I1018 12:18:14.969757 2111595 cli_runner.go:164] Run: docker container inspect functional-955523 --format={{.State.Status}}
	I1018 12:18:14.972650 2111595 out.go:179] * Verifying Kubernetes components...
	I1018 12:18:14.976127 2111595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 12:18:15.006805 2111595 addons.go:238] Setting addon default-storageclass=true in "functional-955523"
	W1018 12:18:15.006819 2111595 addons.go:247] addon default-storageclass should already be in state true
	I1018 12:18:15.006845 2111595 host.go:66] Checking if "functional-955523" exists ...
	I1018 12:18:15.007299 2111595 cli_runner.go:164] Run: docker container inspect functional-955523 --format={{.State.Status}}
	I1018 12:18:15.007574 2111595 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 12:18:15.014022 2111595 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:18:15.014035 2111595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 12:18:15.014113 2111595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
	I1018 12:18:15.037734 2111595 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 12:18:15.037746 2111595 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 12:18:15.037817 2111595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
	I1018 12:18:15.065815 2111595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35709 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/functional-955523/id_rsa Username:docker}
	I1018 12:18:15.073478 2111595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35709 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/functional-955523/id_rsa Username:docker}
	I1018 12:18:15.216820 2111595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 12:18:15.229041 2111595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 12:18:15.293892 2111595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 12:18:16.007811 2111595 node_ready.go:35] waiting up to 6m0s for node "functional-955523" to be "Ready" ...
	I1018 12:18:16.011371 2111595 node_ready.go:49] node "functional-955523" is "Ready"
	I1018 12:18:16.011386 2111595 node_ready.go:38] duration metric: took 3.549132ms for node "functional-955523" to be "Ready" ...
	I1018 12:18:16.011398 2111595 api_server.go:52] waiting for apiserver process to appear ...
	I1018 12:18:16.011465 2111595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:18:16.019730 2111595 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 12:18:16.022531 2111595 addons.go:514] duration metric: took 1.053297457s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 12:18:16.026362 2111595 api_server.go:72] duration metric: took 1.057400724s to wait for apiserver process to appear ...
	I1018 12:18:16.026374 2111595 api_server.go:88] waiting for apiserver healthz status ...
	I1018 12:18:16.026393 2111595 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 12:18:16.035921 2111595 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1018 12:18:16.037019 2111595 api_server.go:141] control plane version: v1.34.1
	I1018 12:18:16.037031 2111595 api_server.go:131] duration metric: took 10.651596ms to wait for apiserver health ...
	I1018 12:18:16.037038 2111595 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 12:18:16.039993 2111595 system_pods.go:59] 8 kube-system pods found
	I1018 12:18:16.040006 2111595 system_pods.go:61] "coredns-66bc5c9577-jfd97" [293741f4-e73e-4dfc-be4f-6f8965d37fef] Running
	I1018 12:18:16.040014 2111595 system_pods.go:61] "etcd-functional-955523" [acf4eb08-0dbb-4f40-b6eb-3cf11aad8bd2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:16.040018 2111595 system_pods.go:61] "kindnet-g62kl" [a29be4b4-2781-46a6-aa5b-f36f743f5429] Running
	I1018 12:18:16.040026 2111595 system_pods.go:61] "kube-apiserver-functional-955523" [188d0ad8-ba0a-4cf4-8802-fc8fff0c2f57] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:16.040032 2111595 system_pods.go:61] "kube-controller-manager-functional-955523" [8c8bde3e-d297-4f21-acdb-55ff7b03dcc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:16.040036 2111595 system_pods.go:61] "kube-proxy-wp97m" [dcb400bb-8e4b-421b-b013-671fef2cf3b8] Running
	I1018 12:18:16.040041 2111595 system_pods.go:61] "kube-scheduler-functional-955523" [b3404f27-e5d3-4cf7-9c9b-13d2627de0ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:16.040045 2111595 system_pods.go:61] "storage-provisioner" [252306de-14d4-42cd-92fd-546202cc84dd] Running
	I1018 12:18:16.040049 2111595 system_pods.go:74] duration metric: took 3.006549ms to wait for pod list to return data ...
	I1018 12:18:16.040056 2111595 default_sa.go:34] waiting for default service account to be created ...
	I1018 12:18:16.042374 2111595 default_sa.go:45] found service account: "default"
	I1018 12:18:16.042386 2111595 default_sa.go:55] duration metric: took 2.325239ms for default service account to be created ...
	I1018 12:18:16.042393 2111595 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 12:18:16.049601 2111595 system_pods.go:86] 8 kube-system pods found
	I1018 12:18:16.049617 2111595 system_pods.go:89] "coredns-66bc5c9577-jfd97" [293741f4-e73e-4dfc-be4f-6f8965d37fef] Running
	I1018 12:18:16.049636 2111595 system_pods.go:89] "etcd-functional-955523" [acf4eb08-0dbb-4f40-b6eb-3cf11aad8bd2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 12:18:16.049641 2111595 system_pods.go:89] "kindnet-g62kl" [a29be4b4-2781-46a6-aa5b-f36f743f5429] Running
	I1018 12:18:16.049648 2111595 system_pods.go:89] "kube-apiserver-functional-955523" [188d0ad8-ba0a-4cf4-8802-fc8fff0c2f57] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 12:18:16.049653 2111595 system_pods.go:89] "kube-controller-manager-functional-955523" [8c8bde3e-d297-4f21-acdb-55ff7b03dcc2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 12:18:16.049657 2111595 system_pods.go:89] "kube-proxy-wp97m" [dcb400bb-8e4b-421b-b013-671fef2cf3b8] Running
	I1018 12:18:16.049662 2111595 system_pods.go:89] "kube-scheduler-functional-955523" [b3404f27-e5d3-4cf7-9c9b-13d2627de0ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 12:18:16.049665 2111595 system_pods.go:89] "storage-provisioner" [252306de-14d4-42cd-92fd-546202cc84dd] Running
	I1018 12:18:16.049673 2111595 system_pods.go:126] duration metric: took 7.273709ms to wait for k8s-apps to be running ...
	I1018 12:18:16.049680 2111595 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 12:18:16.049752 2111595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:18:16.063406 2111595 system_svc.go:56] duration metric: took 13.715736ms WaitForService to wait for kubelet
	I1018 12:18:16.063438 2111595 kubeadm.go:586] duration metric: took 1.09448007s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 12:18:16.063455 2111595 node_conditions.go:102] verifying NodePressure condition ...
	I1018 12:18:16.066092 2111595 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 12:18:16.066107 2111595 node_conditions.go:123] node cpu capacity is 2
	I1018 12:18:16.066128 2111595 node_conditions.go:105] duration metric: took 2.66913ms to run NodePressure ...
	I1018 12:18:16.066140 2111595 start.go:241] waiting for startup goroutines ...
	I1018 12:18:16.066146 2111595 start.go:246] waiting for cluster config update ...
	I1018 12:18:16.066156 2111595 start.go:255] writing updated cluster config ...
	I1018 12:18:16.066446 2111595 ssh_runner.go:195] Run: rm -f paused
	I1018 12:18:16.070112 2111595 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:16.073670 2111595 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jfd97" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:16.078414 2111595 pod_ready.go:94] pod "coredns-66bc5c9577-jfd97" is "Ready"
	I1018 12:18:16.078428 2111595 pod_ready.go:86] duration metric: took 4.744054ms for pod "coredns-66bc5c9577-jfd97" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:16.081204 2111595 pod_ready.go:83] waiting for pod "etcd-functional-955523" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 12:18:18.087000 2111595 pod_ready.go:104] pod "etcd-functional-955523" is not "Ready", error: <nil>
	W1018 12:18:20.087870 2111595 pod_ready.go:104] pod "etcd-functional-955523" is not "Ready", error: <nil>
	W1018 12:18:22.586743 2111595 pod_ready.go:104] pod "etcd-functional-955523" is not "Ready", error: <nil>
	I1018 12:18:24.087644 2111595 pod_ready.go:94] pod "etcd-functional-955523" is "Ready"
	I1018 12:18:24.087658 2111595 pod_ready.go:86] duration metric: took 8.006441861s for pod "etcd-functional-955523" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:24.090172 2111595 pod_ready.go:83] waiting for pod "kube-apiserver-functional-955523" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:24.095360 2111595 pod_ready.go:94] pod "kube-apiserver-functional-955523" is "Ready"
	I1018 12:18:24.095378 2111595 pod_ready.go:86] duration metric: took 5.189293ms for pod "kube-apiserver-functional-955523" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:24.098056 2111595 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-955523" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:24.103112 2111595 pod_ready.go:94] pod "kube-controller-manager-functional-955523" is "Ready"
	I1018 12:18:24.103134 2111595 pod_ready.go:86] duration metric: took 5.065366ms for pod "kube-controller-manager-functional-955523" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:24.105810 2111595 pod_ready.go:83] waiting for pod "kube-proxy-wp97m" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:24.284164 2111595 pod_ready.go:94] pod "kube-proxy-wp97m" is "Ready"
	I1018 12:18:24.284179 2111595 pod_ready.go:86] duration metric: took 178.3562ms for pod "kube-proxy-wp97m" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:24.484742 2111595 pod_ready.go:83] waiting for pod "kube-scheduler-functional-955523" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:24.884810 2111595 pod_ready.go:94] pod "kube-scheduler-functional-955523" is "Ready"
	I1018 12:18:24.884824 2111595 pod_ready.go:86] duration metric: took 400.069305ms for pod "kube-scheduler-functional-955523" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 12:18:24.884834 2111595 pod_ready.go:40] duration metric: took 8.814702583s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 12:18:24.941240 2111595 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 12:18:24.944475 2111595 out.go:179] * Done! kubectl is now configured to use "functional-955523" cluster and "default" namespace by default
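The closing version line reports a client/server skew of one minor release (kubectl 1.33 against a 1.34 control plane), which is inside kubectl's supported window of one minor version in either direction, so it is informational rather than a warning. The skew can be re-checked at any time with:

    kubectl version --output=yaml

which prints both clientVersion and serverVersion.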
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e7653bc9c7bce       ba04bb24b9575       10 minutes ago      Running             storage-provisioner       2                   8fd44ed1df230       storage-provisioner                         kube-system
	5b3b426b0241c       43911e833d64d       10 minutes ago      Running             kube-apiserver            0                   94a8c004f82cd       kube-apiserver-functional-955523            kube-system
	a0c96f46d08ea       7eb2c6ff0c5a7       10 minutes ago      Running             kube-controller-manager   2                   0a85e91787857       kube-controller-manager-functional-955523   kube-system
	3d6e189351ed9       a1894772a478e       10 minutes ago      Running             etcd                      1                   831fb163c52b3       etcd-functional-955523                      kube-system
	fe29506973495       ba04bb24b9575       10 minutes ago      Exited              storage-provisioner       1                   8fd44ed1df230       storage-provisioner                         kube-system
	9768e1a243da3       b1a8c6f707935       10 minutes ago      Running             kindnet-cni               1                   39f7db02a4760       kindnet-g62kl                               kube-system
	1fee019c9744e       05baa95f5142d       10 minutes ago      Running             kube-proxy                1                   76b1c91847749       kube-proxy-wp97m                            kube-system
	49bcc42178cf4       7eb2c6ff0c5a7       10 minutes ago      Exited              kube-controller-manager   1                   0a85e91787857       kube-controller-manager-functional-955523   kube-system
	42cfd21f04099       b5f57ec6b9867       10 minutes ago      Running             kube-scheduler            1                   5a40d8c62b89c       kube-scheduler-functional-955523            kube-system
	482c932303b97       138784d87c9c5       10 minutes ago      Running             coredns                   1                   838d864c0cd20       coredns-66bc5c9577-jfd97                    kube-system
	176afc34450ef       138784d87c9c5       11 minutes ago      Exited              coredns                   0                   838d864c0cd20       coredns-66bc5c9577-jfd97                    kube-system
	65c34c830786f       05baa95f5142d       11 minutes ago      Exited              kube-proxy                0                   76b1c91847749       kube-proxy-wp97m                            kube-system
	d6f640024b52e       b1a8c6f707935       11 minutes ago      Exited              kindnet-cni               0                   39f7db02a4760       kindnet-g62kl                               kube-system
	ffdc1092e749b       b5f57ec6b9867       12 minutes ago      Exited              kube-scheduler            0                   5a40d8c62b89c       kube-scheduler-functional-955523            kube-system
	091437cb53c82       a1894772a478e       12 minutes ago      Exited              etcd                      0                   831fb163c52b3       etcd-functional-955523                      kube-system
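Reading the table: the Exited rows are the containers stopped during the reconfiguration above (their IDs match the earlier crictl stop list), while the Running rows with higher ATTEMPT numbers are their post-restart replacements; kube-apiserver shows ATTEMPT 0 apparently because its pod sandbox was recreated rather than the container restarted in place. Individual entries can be inspected on the node with standard crictl subcommands, for example:

    sudo crictl ps -a --name kube-apiserver
    sudo crictl logs 5b3b426b0241c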
	
	
	==> containerd <==
	Oct 18 12:24:42 functional-955523 containerd[3606]: time="2025-10-18T12:24:42.613127994Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:24:42 functional-955523 containerd[3606]: time="2025-10-18T12:24:42.739803388Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:24:43 functional-955523 containerd[3606]: time="2025-10-18T12:24:43.021795051Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:24:43 functional-955523 containerd[3606]: time="2025-10-18T12:24:43.021833105Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Oct 18 12:24:57 functional-955523 containerd[3606]: time="2025-10-18T12:24:57.689810870Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Oct 18 12:24:57 functional-955523 containerd[3606]: time="2025-10-18T12:24:57.692207064Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:24:57 functional-955523 containerd[3606]: time="2025-10-18T12:24:57.823063406Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:24:58 functional-955523 containerd[3606]: time="2025-10-18T12:24:58.083278327Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:24:58 functional-955523 containerd[3606]: time="2025-10-18T12:24:58.083386910Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Oct 18 12:25:25 functional-955523 containerd[3606]: time="2025-10-18T12:25:25.688784884Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Oct 18 12:25:25 functional-955523 containerd[3606]: time="2025-10-18T12:25:25.691242868Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:25:25 functional-955523 containerd[3606]: time="2025-10-18T12:25:25.818248974Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:25:26 functional-955523 containerd[3606]: time="2025-10-18T12:25:26.104378658Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:25:26 functional-955523 containerd[3606]: time="2025-10-18T12:25:26.104483738Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Oct 18 12:26:06 functional-955523 containerd[3606]: time="2025-10-18T12:26:06.692467815Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Oct 18 12:26:06 functional-955523 containerd[3606]: time="2025-10-18T12:26:06.694860636Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:26:06 functional-955523 containerd[3606]: time="2025-10-18T12:26:06.807616410Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:26:07 functional-955523 containerd[3606]: time="2025-10-18T12:26:07.182262815Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:26:07 functional-955523 containerd[3606]: time="2025-10-18T12:26:07.182286839Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=21196"
	Oct 18 12:27:32 functional-955523 containerd[3606]: time="2025-10-18T12:27:32.689041284Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Oct 18 12:27:32 functional-955523 containerd[3606]: time="2025-10-18T12:27:32.691467351Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:27:32 functional-955523 containerd[3606]: time="2025-10-18T12:27:32.832180342Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Oct 18 12:27:33 functional-955523 containerd[3606]: time="2025-10-18T12:27:33.121972078Z" level=error msg="PullImage \"docker.io/nginx:latest\" failed" error="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Oct 18 12:27:33 functional-955523 containerd[3606]: time="2025-10-18T12:27:33.122244694Z" level=info msg="stop pulling image docker.io/library/nginx:latest: active requests=0, bytes read=10967"
	Oct 18 12:28:44 functional-955523 containerd[3606]: time="2025-10-18T12:28:44.235149440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:hello-node-connect-7d85dfc575-486zs,Uid:a5b67887-0934-4170-9202-17973ef3bc1b,Namespace:default,Attempt:0,}"
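These containerd entries are the likely root cause of the ImagePullBackOff failures elsewhere in this report: every pull of docker.io/nginx is answered with HTTP 429 because Docker Hub rate-limits unauthenticated pulls. The repeated "failed to decode hosts.toml" errors point at a malformed registry hosts file under containerd's certs.d directory, but they are cosmetic here since the pull still reaches registry-1.docker.io. A common CI mitigation is to side-load the image into the node instead of pulling from the Hub (profile name assumed from this run):

    docker pull nginx:latest
    minikube -p functional-955523 image load nginx:latest

Authenticating to Docker Hub or configuring a registry mirror are the other usual fixes.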
	
	
	==> coredns [176afc34450efcb861de660c1d660228a48104355b0efc6167b91e80ad5bd489] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55783 - 13646 "HINFO IN 2965834670196057044.7525666574534054471. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02149907s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [482c932303b97dc57ee5c86e642c514752840fdd30f0d7f8d0538d0e0ef2de95] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50117 - 8560 "HINFO IN 9133458471581067057.4151858571828751761. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03122877s
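Both CoreDNS instances start cleanly: the random-looking HINFO query each one logs is the loop plugin's self-test probe, and the NXDOMAIN answer is the healthy outcome (getting the probe echoed back would indicate a forwarding loop). The first instance's SIGTERM/lameduck lines are simply its shutdown during the control-plane restart. The same logs can be pulled by label:

    kubectl -n kube-system logs -l k8s-app=kube-dns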
	
	
	==> describe nodes <==
	Name:               functional-955523
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-955523
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a5d4c9cccb1ce5842ff2f1e7c0db9c10e4246ee
	                    minikube.k8s.io/name=functional-955523
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T12_16_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 12:16:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-955523
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 12:28:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 12:27:02 +0000   Sat, 18 Oct 2025 12:16:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 12:27:02 +0000   Sat, 18 Oct 2025 12:16:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 12:27:02 +0000   Sat, 18 Oct 2025 12:16:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 12:27:02 +0000   Sat, 18 Oct 2025 12:17:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-955523
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                68d1fac0-3a30-4775-ac10-1725872276da
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.28
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-sgbjm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-486zs          0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 coredns-66bc5c9577-jfd97                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-functional-955523                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-g62kl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-955523             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-955523    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-wp97m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-955523             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-955523 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-955523 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-955523 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-955523 event: Registered Node functional-955523 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-955523 status is now: NodeReady
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-955523 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-955523 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-955523 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node functional-955523 event: Registered Node functional-955523 in Controller
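The node stayed Ready throughout the restart (the Ready condition last transitioned at 12:17:26, before the reconfiguration began), and the Allocated resources block shows the control plane requesting 850m of the 2 available CPUs with almost nothing limited, which is typical for a single-node profile. The same view is available directly:

    kubectl --context functional-955523 describe node functional-955523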
	
	
	==> dmesg <==
	[Oct18 11:37] overlayfs: idmapped layers are currently not supported
	[Oct18 11:38] overlayfs: idmapped layers are currently not supported
	[Oct18 11:40] overlayfs: idmapped layers are currently not supported
	[Oct18 11:42] overlayfs: idmapped layers are currently not supported
	[Oct18 11:43] overlayfs: idmapped layers are currently not supported
	[ +44.292171] overlayfs: idmapped layers are currently not supported
	[  +9.552091] overlayfs: idmapped layers are currently not supported
	[Oct18 11:44] overlayfs: idmapped layers are currently not supported
	[Oct18 11:45] overlayfs: idmapped layers are currently not supported
	[Oct18 11:47] overlayfs: idmapped layers are currently not supported
	[ +55.826989] overlayfs: idmapped layers are currently not supported
	[Oct18 11:48] overlayfs: idmapped layers are currently not supported
	[Oct18 11:49] overlayfs: idmapped layers are currently not supported
	[Oct18 11:50] overlayfs: idmapped layers are currently not supported
	[Oct18 11:51] overlayfs: idmapped layers are currently not supported
	[ +25.885672] overlayfs: idmapped layers are currently not supported
	[ +14.381354] overlayfs: idmapped layers are currently not supported
	[Oct18 11:52] overlayfs: idmapped layers are currently not supported
	[Oct18 11:53] overlayfs: idmapped layers are currently not supported
	[Oct18 11:54] overlayfs: idmapped layers are currently not supported
	[Oct18 11:55] overlayfs: idmapped layers are currently not supported
	[ +48.139503] overlayfs: idmapped layers are currently not supported
	[Oct18 11:56] overlayfs: idmapped layers are currently not supported
	[Oct18 11:58] overlayfs: idmapped layers are currently not supported
	[Oct18 12:00] kauditd_printk_skb: 8 callbacks suppressed
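The recurring "overlayfs: idmapped layers are currently not supported" lines are emitted by this 5.15 aws kernel whenever a container runtime sets up an overlay mount that requests idmapped layers; they are informational and unrelated to the test failures. When scanning a noisy ring buffer they can be filtered out, e.g.:

    sudo dmesg --ctime | grep -v overlayfs

(--ctime renders human-readable timestamps).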
	
	
	==> etcd [091437cb53c8208a269d6f9eb67c48d017cbd39de23f6d0c769653bdcf8b8576] <==
	{"level":"warn","ts":"2025-10-18T12:16:35.853241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:35.871922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:35.892443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:35.920122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:35.933591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:35.966622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:16:36.067935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36508","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:18:04.063823Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T12:18:04.064061Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-955523","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-18T12:18:04.064301Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:18:04.065014Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T12:18:04.065054Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:18:04.065073Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-18T12:18:04.065153Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T12:18:04.065165Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-18T12:18:04.065413Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:18:04.065450Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:18:04.065458Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T12:18:04.065538Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T12:18:04.065566Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T12:18:04.065575Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:18:04.068417Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-18T12:18:04.068507Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T12:18:04.068547Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-18T12:18:04.068554Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-955523","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [3d6e189351ed915d09d07de968fbf97a1f10801b7148a5315c28032aa8ee2b6c] <==
	{"level":"warn","ts":"2025-10-18T12:18:11.834425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:11.857068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:11.873256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:11.899253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:11.913699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:11.942586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:11.971661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:11.988762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.011419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.023564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.040039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.056960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.072777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.088231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.104610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.118084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.133553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.155135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.182136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.197063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.211196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T12:18:12.286942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53280","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T12:28:10.805926Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":950}
	{"level":"info","ts":"2025-10-18T12:28:10.814525Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":950,"took":"8.333821ms","hash":3809581160,"current-db-size-bytes":2961408,"current-db-size":"3.0 MB","current-db-size-in-use-bytes":2961408,"current-db-size-in-use":"3.0 MB"}
	{"level":"info","ts":"2025-10-18T12:28:10.814577Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3809581160,"revision":950,"compact-revision":-1}
	
	
	==> kernel <==
	 12:28:44 up 14:11,  0 user,  load average: 1.43, 0.54, 1.02
	Linux functional-955523 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9768e1a243da307a5e9d75450f025a8932218255b9e0c16d4a6eb1ad3271fff8] <==
	I1018 12:26:35.261519       1 main.go:301] handling current node
	I1018 12:26:45.264304       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:26:45.264352       1 main.go:301] handling current node
	I1018 12:26:55.261038       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:26:55.261273       1 main.go:301] handling current node
	I1018 12:27:05.261424       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:27:05.261455       1 main.go:301] handling current node
	I1018 12:27:15.261437       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:27:15.261470       1 main.go:301] handling current node
	I1018 12:27:25.269037       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:27:25.269076       1 main.go:301] handling current node
	I1018 12:27:35.261460       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:27:35.261521       1 main.go:301] handling current node
	I1018 12:27:45.262791       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:27:45.262847       1 main.go:301] handling current node
	I1018 12:27:55.267199       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:27:55.267392       1 main.go:301] handling current node
	I1018 12:28:05.261533       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:28:05.261602       1 main.go:301] handling current node
	I1018 12:28:15.261231       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:28:15.261263       1 main.go:301] handling current node
	I1018 12:28:25.261443       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:28:25.261477       1 main.go:301] handling current node
	I1018 12:28:35.264115       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:28:35.264154       1 main.go:301] handling current node
	
	
	==> kindnet [d6f640024b52e52344f6d4beea3692e49f480ff306b37e7e7949cd611eb5733a] <==
	I1018 12:16:46.010069       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 12:16:46.010337       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1018 12:16:46.010476       1 main.go:148] setting mtu 1500 for CNI 
	I1018 12:16:46.010489       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 12:16:46.010503       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T12:16:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 12:16:46.212543       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 12:16:46.212714       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 12:16:46.212789       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 12:16:46.213680       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 12:17:16.213129       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 12:17:16.213295       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 12:17:16.213395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 12:17:16.214106       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1018 12:17:17.713809       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 12:17:17.713912       1 metrics.go:72] Registering metrics
	I1018 12:17:17.714060       1 controller.go:711] "Syncing nftables rules"
	I1018 12:17:26.215955       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:17:26.215997       1 main.go:301] handling current node
	I1018 12:17:36.213540       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:17:36.213644       1 main.go:301] handling current node
	I1018 12:17:46.216720       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 12:17:46.216758       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5b3b426b0241c3bc68a439120feb2f099fa5671ef78cf372487f7863c3e46bb6] <==
	I1018 12:18:12.990586       1 cache.go:39] Caches are synced for autoregister controller
	I1018 12:18:12.993243       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:18:13.010161       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 12:18:13.026661       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 12:18:13.027920       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 12:18:13.029678       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 12:18:13.029885       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 12:18:13.037447       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 12:18:13.029911       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 12:18:13.030025       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 12:18:13.759347       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 12:18:13.807791       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1018 12:18:14.177814       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1018 12:18:14.179111       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 12:18:14.184235       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 12:18:14.653695       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 12:18:14.818309       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 12:18:14.897779       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 12:18:14.906578       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 12:18:16.498535       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 12:18:28.340223       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.170.55"}
	I1018 12:18:37.935381       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.114.85"}
	I1018 12:18:42.802569       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.46.35"}
	I1018 12:28:12.945550       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 12:28:44.011433       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.178.61"}
	
	
	==> kube-controller-manager [49bcc42178cf4017980732207b69c732f49dbb0e1d3cb2a5b51aeda669460337] <==
	I1018 12:17:56.138096       1 serving.go:386] Generated self-signed cert in-memory
	I1018 12:17:58.110568       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1018 12:17:58.110607       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:17:58.112555       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1018 12:17:58.112780       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1018 12:17:58.113192       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1018 12:17:58.113348       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 12:18:08.114886       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
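
This first controller-manager instance gives up after its ten-second wait because the apiserver on 192.168.49.2:8441 is not yet serving /healthz (the functional test restarts the control plane at this point); the replacement instance in the next block syncs all caches normally. The same readiness probe can be reproduced by hand; /healthz is typically readable anonymously under default RBAC, though that is an assumption about this cluster's configuration:

	curl -k https://192.168.49.2:8441/healthz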
	
	
	==> kube-controller-manager [a0c96f46d08ea5d6f2ae6eea6e32f62be57d0879bb28524e38800702fc8a9a34] <==
	I1018 12:18:16.316745       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 12:18:16.321010       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 12:18:16.321023       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 12:18:16.321041       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 12:18:16.323177       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 12:18:16.326401       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 12:18:16.328678       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 12:18:16.328853       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 12:18:16.333082       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 12:18:16.339780       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 12:18:16.340046       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 12:18:16.340127       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 12:18:16.340446       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:18:16.340466       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 12:18:16.340474       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 12:18:16.342596       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 12:18:16.342636       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 12:18:16.342679       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 12:18:16.342742       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 12:18:16.343729       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 12:18:16.351676       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 12:18:16.355886       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 12:18:16.356056       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 12:18:16.356211       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-955523"
	I1018 12:18:16.356318       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [1fee019c9744e93950b4d8d93cb88fa80e7fe6aaab1f11c1690f707230b350e4] <==
	I1018 12:17:57.461604       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1018 12:17:57.463297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-955523&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:17:58.974912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-955523&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:18:02.161409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-955523&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:18:05.813191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-955523&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1018 12:18:17.561880       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:18:17.562074       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:18:17.562215       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:18:17.583108       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:18:17.583220       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:18:17.588971       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:18:17.589380       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:18:17.589440       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:18:17.591801       1 config.go:200] "Starting service config controller"
	I1018 12:18:17.591824       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:18:17.591969       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:18:17.591982       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:18:17.592063       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:18:17.592142       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:18:17.594131       1 config.go:309] "Starting node config controller"
	I1018 12:18:17.594383       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:18:17.594488       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:18:17.692212       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:18:17.692214       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:18:17.692252       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
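
The "Kube-proxy configuration may be incomplete or incorrect" line above is advisory, not an error: with nodePortAddresses unset, NodePort services accept connections on every local IP. The log's own suggestion translates to the upstream kube-proxy flag below (the flag is taken verbatim from the warning; how, or whether, to thread it through this test harness is left open):

	kube-proxy --nodeport-addresses=primary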
	
	
	==> kube-proxy [65c34c830786f317cc2cdced96beedd1d5f9539253820d652ecb98011884fac9] <==
	I1018 12:16:45.970560       1 server_linux.go:53] "Using iptables proxy"
	I1018 12:16:46.054142       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 12:16:46.161281       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 12:16:46.161456       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 12:16:46.161748       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 12:16:46.207415       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 12:16:46.207635       1 server_linux.go:132] "Using iptables Proxier"
	I1018 12:16:46.216525       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 12:16:46.217577       1 server.go:527] "Version info" version="v1.34.1"
	I1018 12:16:46.217735       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 12:16:46.226110       1 config.go:106] "Starting endpoint slice config controller"
	I1018 12:16:46.226290       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 12:16:46.226688       1 config.go:200] "Starting service config controller"
	I1018 12:16:46.226786       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 12:16:46.227207       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 12:16:46.227310       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 12:16:46.227905       1 config.go:309] "Starting node config controller"
	I1018 12:16:46.228051       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 12:16:46.228141       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 12:16:46.326851       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 12:16:46.326923       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 12:16:46.327577       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [42cfd21f04099178773cb63ede1529b3067d261e15361d79ce7607d398c1864c] <==
	E1018 12:18:01.438340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:18:01.587366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:18:01.724385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:18:01.791336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:18:01.913550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:18:04.626394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:18:05.022677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 12:18:05.148542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:18:05.232126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:18:05.568461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:18:05.600149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:18:05.707207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 12:18:05.755268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 12:18:06.011658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:18:06.575506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:18:06.743469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:18:06.897246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:18:06.963244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 12:18:07.045131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:18:07.373861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:18:07.634239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:18:07.654726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:18:07.773832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:18:07.853320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 12:18:15.930104       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ffdc1092e749b4668b8ea43725493950b34be136dea12e415cbf3422cb2db3b0] <==
	E1018 12:16:37.201070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:16:37.201127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 12:16:37.201172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:16:37.201206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:16:37.201238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 12:16:37.201274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 12:16:37.201312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 12:16:37.201354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 12:16:37.201388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 12:16:37.201418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 12:16:37.201456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 12:16:38.021893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 12:16:38.079769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 12:16:38.083144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 12:16:38.101016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 12:16:38.105387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 12:16:38.106576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 12:16:38.116416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 12:16:38.152432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1018 12:16:40.773457       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 12:17:53.851512       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 12:17:53.851621       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 12:17:53.851632       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 12:17:53.851667       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 12:17:53.851682       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 12:27:33 functional-955523 kubelet[4605]: E1018 12:27:33.122272    4605 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Oct 18 12:27:33 functional-955523 kubelet[4605]: E1018 12:27:33.122349    4605 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(d355d915-4156-4d0d-b780-f3f53fb401a3): ErrImagePull: failed to pull and unpack image \"docker.io/library/nginx:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 12:27:33 functional-955523 kubelet[4605]: E1018 12:27:33.122389    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d355d915-4156-4d0d-b780-f3f53fb401a3"
	Oct 18 12:27:33 functional-955523 kubelet[4605]: E1018 12:27:33.689176    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sgbjm" podUID="468c9b98-b12b-47b2-b2ce-5096c29aa92d"
	Oct 18 12:27:39 functional-955523 kubelet[4605]: E1018 12:27:39.689953    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="fffe085f-744c-4c3f-8fb5-42cdb69bb49d"
	Oct 18 12:27:44 functional-955523 kubelet[4605]: E1018 12:27:44.688559    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sgbjm" podUID="468c9b98-b12b-47b2-b2ce-5096c29aa92d"
	Oct 18 12:27:47 functional-955523 kubelet[4605]: E1018 12:27:47.688828    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d355d915-4156-4d0d-b780-f3f53fb401a3"
	Oct 18 12:27:52 functional-955523 kubelet[4605]: E1018 12:27:52.689522    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="fffe085f-744c-4c3f-8fb5-42cdb69bb49d"
	Oct 18 12:27:57 functional-955523 kubelet[4605]: E1018 12:27:57.689107    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sgbjm" podUID="468c9b98-b12b-47b2-b2ce-5096c29aa92d"
	Oct 18 12:27:59 functional-955523 kubelet[4605]: E1018 12:27:59.689078    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d355d915-4156-4d0d-b780-f3f53fb401a3"
	Oct 18 12:28:05 functional-955523 kubelet[4605]: E1018 12:28:05.689211    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="fffe085f-744c-4c3f-8fb5-42cdb69bb49d"
	Oct 18 12:28:09 functional-955523 kubelet[4605]: E1018 12:28:09.689219    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sgbjm" podUID="468c9b98-b12b-47b2-b2ce-5096c29aa92d"
	Oct 18 12:28:10 functional-955523 kubelet[4605]: E1018 12:28:10.689763    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d355d915-4156-4d0d-b780-f3f53fb401a3"
	Oct 18 12:28:19 functional-955523 kubelet[4605]: E1018 12:28:19.689835    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="fffe085f-744c-4c3f-8fb5-42cdb69bb49d"
	Oct 18 12:28:20 functional-955523 kubelet[4605]: E1018 12:28:20.689313    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sgbjm" podUID="468c9b98-b12b-47b2-b2ce-5096c29aa92d"
	Oct 18 12:28:23 functional-955523 kubelet[4605]: E1018 12:28:23.688604    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d355d915-4156-4d0d-b780-f3f53fb401a3"
	Oct 18 12:28:30 functional-955523 kubelet[4605]: E1018 12:28:30.691953    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="fffe085f-744c-4c3f-8fb5-42cdb69bb49d"
	Oct 18 12:28:33 functional-955523 kubelet[4605]: E1018 12:28:33.689441    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-75c85bcc94-sgbjm" podUID="468c9b98-b12b-47b2-b2ce-5096c29aa92d"
	Oct 18 12:28:37 functional-955523 kubelet[4605]: E1018 12:28:37.689217    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="d355d915-4156-4d0d-b780-f3f53fb401a3"
	Oct 18 12:28:42 functional-955523 kubelet[4605]: E1018 12:28:42.699042    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: failed to pull and unpack image \\\"docker.io/library/nginx:alpine\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx-svc" podUID="fffe085f-744c-4c3f-8fb5-42cdb69bb49d"
	Oct 18 12:28:43 functional-955523 kubelet[4605]: I1018 12:28:43.985204    4605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49nxk\" (UniqueName: \"kubernetes.io/projected/a5b67887-0934-4170-9202-17973ef3bc1b-kube-api-access-49nxk\") pod \"hello-node-connect-7d85dfc575-486zs\" (UID: \"a5b67887-0934-4170-9202-17973ef3bc1b\") " pod="default/hello-node-connect-7d85dfc575-486zs"
	Oct 18 12:28:44 functional-955523 kubelet[4605]: E1018 12:28:44.889440    4605 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 18 12:28:44 functional-955523 kubelet[4605]: E1018 12:28:44.889509    4605 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Oct 18 12:28:44 functional-955523 kubelet[4605]: E1018 12:28:44.889597    4605 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-486zs_default(a5b67887-0934-4170-9202-17973ef3bc1b): ErrImagePull: failed to pull and unpack image \"docker.io/kicbase/echo-server:latest\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Oct 18 12:28:44 functional-955523 kubelet[4605]: E1018 12:28:44.889652    4605 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"failed to pull and unpack image \\\"docker.io/kicbase/echo-server:latest\\\": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-486zs" podUID="a5b67887-0934-4170-9202-17973ef3bc1b"
	
	
	==> storage-provisioner [e7653bc9c7bce0429bf09242499cd82ae98695c13464ab2a3a7fdd178f7f0e1e] <==
	W1018 12:28:20.377065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:22.379614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:22.386094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:24.388830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:24.395973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:26.399249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:26.403593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:28.407382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:28.413904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:30.417630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:30.422292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:32.425500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:32.430020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:34.433171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:34.439784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:36.443495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:36.447818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:38.452011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:38.459051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:40.462804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:40.470436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:42.474139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:42.481069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:44.483883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 12:28:44.491924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fe295069734951076b4f9abc072eedf18f67c0e70fdc6189c67ca72bb4c271d6] <==
	I1018 12:17:54.879457       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 12:17:54.890543       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
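The storage-provisioner warnings above appear to come from its watch/leader-election traffic against the core/v1 Endpoints API, which Kubernetes deprecates in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. They are benign for this run, but the replacement objects can be inspected directly; a minimal sketch against this report's functional-955523 context:

    # List the EndpointSlice objects that supersede v1 Endpoints.
    kubectl --context functional-955523 get endpointslices -A

    # Slices for a single service carry the standard kubernetes.io/service-name label.
    kubectl --context functional-955523 -n default get endpointslices \
      -l kubernetes.io/service-name=nginx-svc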
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-955523 -n functional-955523
helpers_test.go:269: (dbg) Run:  kubectl --context functional-955523 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-sgbjm hello-node-connect-7d85dfc575-486zs nginx-svc sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-955523 describe pod hello-node-75c85bcc94-sgbjm hello-node-connect-7d85dfc575-486zs nginx-svc sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-955523 describe pod hello-node-75c85bcc94-sgbjm hello-node-connect-7d85dfc575-486zs nginx-svc sp-pod:

-- stdout --
	Name:             hello-node-75c85bcc94-sgbjm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-955523/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:18:37 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-grlqr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-grlqr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-75c85bcc94-sgbjm to functional-955523
	  Warning  Failed     10m                    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m13s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m12s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     7m12s (x4 over 9m55s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     4m59s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m48s (x21 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-486zs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-955523/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:28:43 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49nxk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-49nxk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age   From               Message
	  ----     ------     ----  ----               -------
	  Normal   Scheduled  2s    default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-486zs to functional-955523
	  Normal   Pulling    1s    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     1s    kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     1s    kubelet            Error: ErrImagePull
	  Normal   BackOff    0s    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     0s    kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-955523/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:18:42 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jtx6g (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-jtx6g:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/nginx-svc to functional-955523
	  Warning  Failed     8m42s                 kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m10s (x5 over 10m)   kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     7m9s (x4 over 10m)    kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m9s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-955523/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 12:24:42 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wsxg9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-wsxg9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  4m3s                default-scheduler  Successfully assigned default/sp-pod to functional-955523
	  Warning  Failed     2m38s               kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    73s (x5 over 4m3s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     72s (x4 over 4m2s)  kubelet            Failed to pull image "docker.io/nginx": failed to pull and unpack image "docker.io/library/nginx:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     72s (x5 over 4m2s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    8s (x15 over 4m2s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     8s (x15 over 4m2s)  kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (249.29s)
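Every failure in this group traces back to the same root cause visible in the events: Docker Hub answered with 429 toomanyrequests because the CI host exhausted its unauthenticated pull allowance (roughly 100 pulls per 6 hours per IP at the time of writing), so the kubelet could never fetch nginx. One common mitigation is to preload the images so no registry pull happens at schedule time; a minimal sketch, assuming the images are available to the host's Docker daemon:

    # Pull once on the host (or reuse its cache), then side-load into the node,
    # so the kubelet resolves the image locally instead of hitting registry-1.docker.io.
    docker pull nginx:alpine && docker pull nginx
    minikube -p functional-955523 image load nginx:alpine
    minikube -p functional-955523 image load nginx:latest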

TestFunctional/parallel/ServiceCmd/DeployApp (600.87s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-955523 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-955523 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-sgbjm" [468c9b98-b12b-47b2-b2ce-5096c29aa92d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-955523 -n functional-955523
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-18 12:28:38.311445733 +0000 UTC m=+1677.569083874
functional_test.go:1460: (dbg) Run:  kubectl --context functional-955523 describe po hello-node-75c85bcc94-sgbjm -n default
functional_test.go:1460: (dbg) kubectl --context functional-955523 describe po hello-node-75c85bcc94-sgbjm -n default:
Name:             hello-node-75c85bcc94-sgbjm
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-955523/192.168.49.2
Start Time:       Sat, 18 Oct 2025 12:18:37 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-grlqr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-grlqr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-sgbjm to functional-955523
  Warning  Failed     10m                   kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m5s (x5 over 10m)    kubelet            Error: ErrImagePull
  Warning  Failed     7m5s (x4 over 9m48s)  kubelet            Failed to pull image "kicbase/echo-server": failed to pull and unpack image "docker.io/kicbase/echo-server:latest": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     4m52s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m41s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-955523 logs hello-node-75c85bcc94-sgbjm -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-955523 logs hello-node-75c85bcc94-sgbjm -n default: exit status 1 (102.885126ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-sgbjm" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-955523 logs hello-node-75c85bcc94-sgbjm -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.87s)
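An alternative to preloading is authenticating the pulls, which raises the Docker Hub limit. The sketch below attaches registry credentials to the default service account so new pods in the namespace pull as an authenticated user; <user> and <token> are placeholders, and regcred is an arbitrary secret name:

    kubectl --context functional-955523 create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> \
      --docker-password=<token>

    # Pods created under the default service account now pull with these credentials.
    kubectl --context functional-955523 patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "regcred"}]}'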

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.85s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-955523 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [fffe085f-744c-4c3f-8fb5-42cdb69bb49d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1018 12:18:46.999824 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:18:47.006577 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:18:47.017961 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:18:47.039454 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:18:47.080847 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:18:47.162291 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:18:47.323799 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:18:47.645451 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:18:48.287065 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:18:49.569326 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:18:52.131427 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:18:57.253586 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:07.495755 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:19:27.977766 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:20:08.939866 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:21:30.861287 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-955523 -n functional-955523
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2025-10-18 12:22:43.157891223 +0000 UTC m=+1322.415529332
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-955523 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-955523 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-955523/192.168.49.2
Start Time:       Sat, 18 Oct 2025 12:18:42 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jtx6g (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-jtx6g:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  4m1s               default-scheduler  Successfully assigned default/nginx-svc to functional-955523
  Warning  Failed     2m40s              kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   Pulling    68s (x5 over 4m)   kubelet            Pulling image "docker.io/nginx:alpine"
  Warning  Failed     67s (x4 over 4m)   kubelet            Failed to pull image "docker.io/nginx:alpine": failed to pull and unpack image "docker.io/library/nginx:alpine": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/library/nginx/manifests/sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22: 429 Too Many Requests - Server message: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     67s (x5 over 4m)   kubelet            Error: ErrImagePull
  Normal   BackOff    11s (x14 over 4m)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
  Warning  Failed     11s (x14 over 4m)  kubelet            Error: ImagePullBackOff
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-955523 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-955523 logs nginx-svc -n default: exit status 1 (95.727376ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-955523 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.85s)
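The cert_rotation errors interleaved above are noise rather than the failure cause: they point at the client.crt of the already-deleted addons-897172 profile, i.e. a stale kubeconfig entry, while nginx-svc itself failed on the same Docker Hub rate limit as the other tests. A minimal cleanup sketch for the stale entry, assuming the default kubeconfig location:

    # Drop the leftover context, cluster, and user for the deleted profile.
    kubectl config delete-context addons-897172
    kubectl config delete-cluster addons-897172
    kubectl config unset users.addons-897172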

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (113.2s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1018 12:22:43.341723 2076961 retry.go:31] will retry after 2.564449516s: Temporary Error: Get "http:": http: no Host in request URL
I1018 12:22:45.906317 2076961 retry.go:31] will retry after 3.244757422s: Temporary Error: Get "http:": http: no Host in request URL
I1018 12:22:49.151587 2076961 retry.go:31] will retry after 4.673096528s: Temporary Error: Get "http:": http: no Host in request URL
I1018 12:22:53.824900 2076961 retry.go:31] will retry after 12.516461591s: Temporary Error: Get "http:": http: no Host in request URL
I1018 12:23:06.342007 2076961 retry.go:31] will retry after 17.320164493s: Temporary Error: Get "http:": http: no Host in request URL
I1018 12:23:23.662789 2076961 retry.go:31] will retry after 23.241172622s: Temporary Error: Get "http:": http: no Host in request URL
I1018 12:23:46.905082 2076961 retry.go:31] will retry after 49.563439893s: Temporary Error: Get "http:": http: no Host in request URL
E1018 12:23:46.997521 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:24:14.703127 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-955523 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
nginx-svc   LoadBalancer   10.111.46.35   10.111.46.35   80:30541/TCP   5m54s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (113.20s)
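The retry loop against "http:" shows the test was handed an empty URL even though the tunnel did assign the service an external IP (10.111.46.35 above); with the backing pod stuck in ImagePullBackOff there was nothing to serve in any case. A manual check of the same path, assuming a tunnel is kept running in a separate shell, would look like:

    # Shell 1: keep LoadBalancer services reachable by assigning them external IPs.
    minikube -p functional-955523 tunnel

    # Shell 2: read the assigned IP and probe it (this still fails until the
    # nginx image can actually be pulled and the pod becomes Ready).
    kubectl --context functional-955523 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.111.46.35/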

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-955523 service --namespace=default --https --url hello-node: exit status 115 (407.098688ms)

-- stdout --
	https://192.168.49.2:32621
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-955523 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)
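SVC_UNREACHABLE here means the service object exists but has no ready backend, which is consistent with hello-node-75c85bcc94-sgbjm never leaving ImagePullBackOff; the same condition explains the Format and URL failures below. A quick confirmation sketch:

    # No ready addresses in the slices => minikube service reports SVC_UNREACHABLE.
    kubectl --context functional-955523 -n default get endpointslices \
      -l kubernetes.io/service-name=hello-node
    kubectl --context functional-955523 -n default get pods -l app=hello-node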

TestFunctional/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-955523 service hello-node --url --format={{.IP}}: exit status 115 (391.382899ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-955523 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-955523 service hello-node --url: exit status 115 (428.614222ms)

-- stdout --
	http://192.168.49.2:32621
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-955523 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32621
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.43s)


Test pass (289/331)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.84
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 5.42
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 172.65
29 TestAddons/serial/Volcano 39.7
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 9.85
35 TestAddons/parallel/Registry 16.91
36 TestAddons/parallel/RegistryCreds 0.74
38 TestAddons/parallel/InspektorGadget 6.28
39 TestAddons/parallel/MetricsServer 5.87
41 TestAddons/parallel/CSI 38.25
42 TestAddons/parallel/Headlamp 17.89
43 TestAddons/parallel/CloudSpanner 5.63
45 TestAddons/parallel/NvidiaDevicePlugin 6.65
46 TestAddons/parallel/Yakd 11.81
48 TestAddons/StoppedEnableDisable 12.32
49 TestCertOptions 38.68
50 TestCertExpiration 235.25
52 TestForceSystemdFlag 34.44
53 TestForceSystemdEnv 42.93
59 TestErrorSpam/setup 32.91
60 TestErrorSpam/start 0.78
61 TestErrorSpam/status 1.1
62 TestErrorSpam/pause 1.98
63 TestErrorSpam/unpause 1.98
64 TestErrorSpam/stop 1.67
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 80.61
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 7.12
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.55
76 TestFunctional/serial/CacheCmd/cache/add_local 1.29
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.94
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 40.78
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.51
87 TestFunctional/serial/LogsFileCmd 1.51
88 TestFunctional/serial/InvalidService 4.75
90 TestFunctional/parallel/ConfigCmd 0.47
92 TestFunctional/parallel/DryRun 0.66
93 TestFunctional/parallel/InternationalLanguage 0.23
94 TestFunctional/parallel/StatusCmd 1.05
99 TestFunctional/parallel/AddonsCmd 0.23
102 TestFunctional/parallel/SSHCmd 0.57
103 TestFunctional/parallel/CpCmd 2.08
105 TestFunctional/parallel/FileSync 0.36
106 TestFunctional/parallel/CertSync 2.22
110 TestFunctional/parallel/NodeLabels 0.13
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
114 TestFunctional/parallel/License 0.45
115 TestFunctional/parallel/Version/short 0.07
116 TestFunctional/parallel/Version/components 1.17
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.45
122 TestFunctional/parallel/ImageCommands/Setup 0.65
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.45
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.39
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.42
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.66
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ServiceCmd/List 0.35
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
150 TestFunctional/parallel/ProfileCmd/profile_list 0.42
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
152 TestFunctional/parallel/MountCmd/any-port 7.67
153 TestFunctional/parallel/MountCmd/specific-port 1.74
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.95
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 201.46
163 TestMultiControlPlane/serial/DeployApp 7.4
164 TestMultiControlPlane/serial/PingHostFromPods 1.65
165 TestMultiControlPlane/serial/AddWorkerNode 59.81
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
168 TestMultiControlPlane/serial/CopyFile 19.96
169 TestMultiControlPlane/serial/StopSecondaryNode 12.92
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
171 TestMultiControlPlane/serial/RestartSecondaryNode 14.66
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.08
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 96.73
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.95
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
176 TestMultiControlPlane/serial/StopCluster 36.11
177 TestMultiControlPlane/serial/RestartCluster 59.93
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
179 TestMultiControlPlane/serial/AddSecondaryNode 96.55
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
184 TestJSONOutput/start/Command 83.17
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.72
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/unpause/Command 0.61
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 6.08
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.23
209 TestKicCustomNetwork/create_custom_network 44.71
210 TestKicCustomNetwork/use_default_bridge_network 40.93
211 TestKicExistingNetwork 35.3
212 TestKicCustomSubnet 36.2
213 TestKicStaticIP 37.44
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 75.46
218 TestMountStart/serial/StartWithMountFirst 7.67
219 TestMountStart/serial/VerifyMountFirst 0.28
220 TestMountStart/serial/StartWithMountSecond 6.06
221 TestMountStart/serial/VerifyMountSecond 0.26
222 TestMountStart/serial/DeleteFirst 1.7
223 TestMountStart/serial/VerifyMountPostDelete 0.26
224 TestMountStart/serial/Stop 1.31
225 TestMountStart/serial/RestartStopped 7.97
226 TestMountStart/serial/VerifyMountPostStop 0.28
229 TestMultiNode/serial/FreshStart2Nodes 111.8
230 TestMultiNode/serial/DeployApp2Nodes 5.58
231 TestMultiNode/serial/PingHostFrom2Pods 1
232 TestMultiNode/serial/AddNode 55.38
233 TestMultiNode/serial/MultiNodeLabels 0.09
234 TestMultiNode/serial/ProfileList 0.72
235 TestMultiNode/serial/CopyFile 10.42
236 TestMultiNode/serial/StopNode 2.38
237 TestMultiNode/serial/StartAfterStop 8.3
238 TestMultiNode/serial/RestartKeepsNodes 78.57
239 TestMultiNode/serial/DeleteNode 5.64
240 TestMultiNode/serial/StopMultiNode 24.07
241 TestMultiNode/serial/RestartMultiNode 48.57
242 TestMultiNode/serial/ValidateNameConflict 32.57
247 TestPreload 118.93
249 TestScheduledStopUnix 109.18
252 TestInsufficientStorage 13.66
253 TestRunningBinaryUpgrade 73.81
255 TestKubernetesUpgrade 101.18
256 TestMissingContainerUpgrade 142.64
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
259 TestNoKubernetes/serial/StartWithK8s 43.44
260 TestNoKubernetes/serial/StartWithStopK8s 25.06
261 TestNoKubernetes/serial/Start 8.05
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
263 TestNoKubernetes/serial/ProfileList 0.68
264 TestNoKubernetes/serial/Stop 1.34
265 TestNoKubernetes/serial/StartNoArgs 6.92
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
267 TestStoppedBinaryUpgrade/Setup 1.74
268 TestStoppedBinaryUpgrade/Upgrade 66.33
269 TestStoppedBinaryUpgrade/MinikubeLogs 2.25
278 TestPause/serial/Start 86.3
286 TestNetworkPlugins/group/false 4.95
287 TestPause/serial/SecondStartNoReconfiguration 7.11
291 TestPause/serial/Pause 0.84
292 TestPause/serial/VerifyStatus 0.43
293 TestPause/serial/Unpause 0.79
294 TestPause/serial/PauseAgain 1.14
295 TestPause/serial/DeletePaused 5.36
296 TestPause/serial/VerifyDeletedResources 1.09
298 TestStartStop/group/old-k8s-version/serial/FirstStart 62.25
299 TestStartStop/group/old-k8s-version/serial/DeployApp 9.4
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.18
301 TestStartStop/group/old-k8s-version/serial/Stop 12.06
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
303 TestStartStop/group/old-k8s-version/serial/SecondStart 50.23
304 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
305 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
306 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
307 TestStartStop/group/old-k8s-version/serial/Pause 4
309 TestStartStop/group/embed-certs/serial/FirstStart 61.7
311 TestStartStop/group/no-preload/serial/FirstStart 76.26
312 TestStartStop/group/embed-certs/serial/DeployApp 9.42
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
314 TestStartStop/group/embed-certs/serial/Stop 12.14
315 TestStartStop/group/no-preload/serial/DeployApp 10.44
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/embed-certs/serial/SecondStart 52.64
318 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.6
319 TestStartStop/group/no-preload/serial/Stop 12.88
320 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
321 TestStartStop/group/no-preload/serial/SecondStart 51.39
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
324 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
325 TestStartStop/group/embed-certs/serial/Pause 3.14
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.94
328 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
329 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
330 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
331 TestStartStop/group/no-preload/serial/Pause 3.98
333 TestStartStop/group/newest-cni/serial/FirstStart 43.69
334 TestStartStop/group/newest-cni/serial/DeployApp 0
335 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.97
336 TestStartStop/group/newest-cni/serial/Stop 1.33
337 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
338 TestStartStop/group/newest-cni/serial/SecondStart 15.11
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
342 TestStartStop/group/newest-cni/serial/Pause 3.47
343 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.59
344 TestNetworkPlugins/group/auto/Start 58.92
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.85
346 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.31
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.27
348 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.4
349 TestNetworkPlugins/group/auto/KubeletFlags 0.31
350 TestNetworkPlugins/group/auto/NetCatPod 9.28
351 TestNetworkPlugins/group/auto/DNS 0.18
352 TestNetworkPlugins/group/auto/Localhost 0.16
353 TestNetworkPlugins/group/auto/HairPin 0.15
354 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
355 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.24
356 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
357 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.18
358 TestNetworkPlugins/group/kindnet/Start 88.61
359 TestNetworkPlugins/group/calico/Start 60.13
360 TestNetworkPlugins/group/calico/ControllerPod 6
361 TestNetworkPlugins/group/calico/KubeletFlags 0.33
362 TestNetworkPlugins/group/calico/NetCatPod 10.28
363 TestNetworkPlugins/group/calico/DNS 0.22
364 TestNetworkPlugins/group/calico/Localhost 0.17
365 TestNetworkPlugins/group/calico/HairPin 0.15
366 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
367 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
368 TestNetworkPlugins/group/kindnet/NetCatPod 9.32
369 TestNetworkPlugins/group/kindnet/DNS 0.22
370 TestNetworkPlugins/group/kindnet/Localhost 0.2
371 TestNetworkPlugins/group/kindnet/HairPin 0.19
372 TestNetworkPlugins/group/custom-flannel/Start 71.37
373 TestNetworkPlugins/group/enable-default-cni/Start 51.04
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.29
378 TestNetworkPlugins/group/custom-flannel/DNS 0.18
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
384 TestNetworkPlugins/group/flannel/Start 63.21
385 TestNetworkPlugins/group/bridge/Start 84.44
386 TestNetworkPlugins/group/flannel/ControllerPod 6.01
387 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
388 TestNetworkPlugins/group/flannel/NetCatPod 10.28
389 TestNetworkPlugins/group/flannel/DNS 0.17
390 TestNetworkPlugins/group/flannel/Localhost 0.16
391 TestNetworkPlugins/group/flannel/HairPin 0.16
392 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
393 TestNetworkPlugins/group/bridge/NetCatPod 10.41
394 TestNetworkPlugins/group/bridge/DNS 0.23
395 TestNetworkPlugins/group/bridge/Localhost 0.19
396 TestNetworkPlugins/group/bridge/HairPin 0.19

TestDownloadOnly/v1.28.0/json-events (5.84s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-038567 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-038567 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.839590296s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.84s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1018 12:00:46.624049 2076961 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1018 12:00:46.624142 2076961 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
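A minimal manual spot-check of the same condition, assuming the cache layout shown in the log above (the path is copied verbatim; ls stands in for the test's file-existence check):

	# Hedged sketch, not part of the recorded run: confirm the cached preload
	# tarball the test looks for actually exists and is non-empty.
	ls -lh /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4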

TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-038567
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-038567: exit status 85 (102.375449ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-038567 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-038567 │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:00:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:00:40.824707 2076966 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:00:40.824899 2076966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:00:40.824926 2076966 out.go:374] Setting ErrFile to fd 2...
	I1018 12:00:40.824945 2076966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:00:40.825254 2076966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
	W1018 12:00:40.825430 2076966 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21647-2075029/.minikube/config/config.json: open /home/jenkins/minikube-integration/21647-2075029/.minikube/config/config.json: no such file or directory
	I1018 12:00:40.825895 2076966 out.go:368] Setting JSON to true
	I1018 12:00:40.826772 2076966 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":49388,"bootTime":1760739453,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 12:00:40.826863 2076966 start.go:141] virtualization:  
	I1018 12:00:40.830876 2076966 out.go:99] [download-only-038567] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1018 12:00:40.831054 2076966 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball: no such file or directory
	I1018 12:00:40.831158 2076966 notify.go:220] Checking for updates...
	I1018 12:00:40.834579 2076966 out.go:171] MINIKUBE_LOCATION=21647
	I1018 12:00:40.837693 2076966 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:00:40.840542 2076966 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	I1018 12:00:40.843416 2076966 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	I1018 12:00:40.846234 2076966 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1018 12:00:40.851930 2076966 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 12:00:40.852238 2076966 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:00:40.882153 2076966 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:00:40.882270 2076966 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:00:40.942450 2076966 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 12:00:40.933108295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:00:40.942558 2076966 docker.go:318] overlay module found
	I1018 12:00:40.945449 2076966 out.go:99] Using the docker driver based on user configuration
	I1018 12:00:40.945487 2076966 start.go:305] selected driver: docker
	I1018 12:00:40.945499 2076966 start.go:925] validating driver "docker" against <nil>
	I1018 12:00:40.945594 2076966 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:00:41.009072 2076966 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 12:00:40.999378942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:00:41.009228 2076966 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 12:00:41.009551 2076966 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1018 12:00:41.009701 2076966 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 12:00:41.012848 2076966 out.go:171] Using Docker driver with root privileges
	I1018 12:00:41.015765 2076966 cni.go:84] Creating CNI manager for ""
	I1018 12:00:41.015834 2076966 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1018 12:00:41.015877 2076966 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:00:41.015983 2076966 start.go:349] cluster config:
	{Name:download-only-038567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-038567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:00:41.018980 2076966 out.go:99] Starting "download-only-038567" primary control-plane node in "download-only-038567" cluster
	I1018 12:00:41.019005 2076966 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1018 12:00:41.021820 2076966 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:00:41.021867 2076966 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1018 12:00:41.021954 2076966 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:00:41.037516 2076966 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 12:00:41.037707 2076966 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 12:00:41.037809 2076966 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 12:00:41.075370 2076966 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1018 12:00:41.075398 2076966 cache.go:58] Caching tarball of preloaded images
	I1018 12:00:41.075565 2076966 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1018 12:00:41.078895 2076966 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1018 12:00:41.078928 2076966 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1018 12:00:41.170274 2076966 preload.go:290] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1018 12:00:41.170403 2076966 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1018 12:00:45.270905 2076966 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1018 12:00:45.271702 2076966 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/download-only-038567/config.json ...
	I1018 12:00:45.271802 2076966 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/download-only-038567/config.json: {Name:mkb997510ea0efdbee679ea9c9776d208af37d26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:45.272329 2076966 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1018 12:00:45.272834 2076966 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-038567 host does not exist
	  To start a cluster, run: "minikube start -p download-only-038567"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)
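Exit status 85 appears to be the expected outcome here: the download-only profile never created a host, so there are no logs to read. A hedged sketch of the same check by hand, using the profile name from the run above:

	# Sketch, not part of the recorded run; the test treats exit status 85 as a pass.
	out/minikube-linux-arm64 logs -p download-only-038567
	echo $?   # expected: 85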

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-038567
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.1/json-events (5.42s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-110073 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-110073 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.422420577s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.42s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1018 12:00:52.498833 2076961 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1018 12:00:52.498869 2076961 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-110073
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-110073: exit status 85 (92.069939ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-038567 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-038567 │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │ 18 Oct 25 12:00 UTC │
	│ delete  │ -p download-only-038567                                                                                                                                                               │ download-only-038567 │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │ 18 Oct 25 12:00 UTC │
	│ start   │ -o=json --download-only -p download-only-110073 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-110073 │ jenkins │ v1.37.0 │ 18 Oct 25 12:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 12:00:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 12:00:47.125414 2077170 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:00:47.125604 2077170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:00:47.125630 2077170 out.go:374] Setting ErrFile to fd 2...
	I1018 12:00:47.125649 2077170 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:00:47.126063 2077170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
	I1018 12:00:47.127067 2077170 out.go:368] Setting JSON to true
	I1018 12:00:47.128018 2077170 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":49395,"bootTime":1760739453,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 12:00:47.128091 2077170 start.go:141] virtualization:  
	I1018 12:00:47.131477 2077170 out.go:99] [download-only-110073] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:00:47.131730 2077170 notify.go:220] Checking for updates...
	I1018 12:00:47.134640 2077170 out.go:171] MINIKUBE_LOCATION=21647
	I1018 12:00:47.137748 2077170 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:00:47.140797 2077170 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	I1018 12:00:47.143882 2077170 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	I1018 12:00:47.146968 2077170 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1018 12:00:47.152691 2077170 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 12:00:47.152985 2077170 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:00:47.182406 2077170 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:00:47.182524 2077170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:00:47.246111 2077170 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-10-18 12:00:47.237211894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:00:47.246212 2077170 docker.go:318] overlay module found
	I1018 12:00:47.249183 2077170 out.go:99] Using the docker driver based on user configuration
	I1018 12:00:47.249215 2077170 start.go:305] selected driver: docker
	I1018 12:00:47.249222 2077170 start.go:925] validating driver "docker" against <nil>
	I1018 12:00:47.249333 2077170 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:00:47.303637 2077170 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-10-18 12:00:47.294568204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:00:47.303814 2077170 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 12:00:47.304155 2077170 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1018 12:00:47.304314 2077170 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 12:00:47.307474 2077170 out.go:171] Using Docker driver with root privileges
	I1018 12:00:47.310302 2077170 cni.go:84] Creating CNI manager for ""
	I1018 12:00:47.310376 2077170 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1018 12:00:47.310397 2077170 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 12:00:47.310468 2077170 start.go:349] cluster config:
	{Name:download-only-110073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-110073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:00:47.313467 2077170 out.go:99] Starting "download-only-110073" primary control-plane node in "download-only-110073" cluster
	I1018 12:00:47.313501 2077170 cache.go:123] Beginning downloading kic base image for docker with containerd
	I1018 12:00:47.316411 2077170 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1018 12:00:47.316442 2077170 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1018 12:00:47.316550 2077170 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 12:00:47.331876 2077170 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 12:00:47.332014 2077170 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 12:00:47.332034 2077170 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 12:00:47.332038 2077170 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 12:00:47.332046 2077170 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 12:00:47.370853 2077170 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1018 12:00:47.370876 2077170 cache.go:58] Caching tarball of preloaded images
	I1018 12:00:47.371045 2077170 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1018 12:00:47.374186 2077170 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1018 12:00:47.374216 2077170 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1018 12:00:47.461560 2077170 preload.go:290] Got checksum from GCS API "435977642a202d20ca04f26d87d875a8"
	I1018 12:00:47.461611 2077170 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:435977642a202d20ca04f26d87d875a8 -> /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1018 12:00:51.867805 2077170 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1018 12:00:51.868199 2077170 profile.go:143] Saving config to /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/download-only-110073/config.json ...
	I1018 12:00:51.868235 2077170 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/download-only-110073/config.json: {Name:mkd4762f7838f0318ff2ab17429e85acf9d25057 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 12:00:51.868420 2077170 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1018 12:00:51.868579 2077170 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21647-2075029/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-110073 host does not exist
	  To start a cluster, run: "minikube start -p download-only-110073"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-110073
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1018 12:00:53.624848 2076961 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-441377 --alsologtostderr --binary-mirror http://127.0.0.1:43257 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-441377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-441377
--- PASS: TestBinaryMirror (0.61s)
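A hedged sketch of the mirrored-download flow exercised above, with the flags copied from the run; note that the --binary-mirror address is a test-local HTTP server, so this only succeeds while such a mirror is actually serving the kubectl binary:

	# Sketch, not part of the recorded run.
	out/minikube-linux-arm64 start --download-only -p binary-mirror-441377 --alsologtostderr --binary-mirror http://127.0.0.1:43257 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 delete -p binary-mirror-441377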

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-897172
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-897172: exit status 85 (69.518417ms)

-- stdout --
	* Profile "addons-897172" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-897172"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
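A hedged sketch of the invariant this test pins down, with the command copied from the run above: addon operations against a profile that has not been created must fail with exit status 85 rather than do anything:

	# Sketch, not part of the recorded run.
	out/minikube-linux-arm64 addons enable dashboard -p addons-897172
	echo $?   # expected: 85 while the profile does not exist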

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-897172
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-897172: exit status 85 (79.958433ms)

-- stdout --
	* Profile "addons-897172" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-897172"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (172.65s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-897172 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-897172 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m52.645289298s)
--- PASS: TestAddons/Setup (172.65s)

TestAddons/serial/Volcano (39.7s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 60.234886ms
addons_test.go:868: volcano-scheduler stabilized in 60.596715ms
addons_test.go:884: volcano-controller stabilized in 60.625654ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-9czgc" [09a05a38-8257-4b92-b360-d0e8616f1e84] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003312737s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-5p8rs" [4aa3da20-6d1c-475f-a7da-9180ccdc2aa4] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.002977673s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-kg7hw" [3060e71f-1999-48a5-b191-fd8498ee09ee] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003473462s
addons_test.go:903: (dbg) Run:  kubectl --context addons-897172 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-897172 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-897172 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [51c38b2a-304c-4e67-803a-354219a397b5] Pending
helpers_test.go:352: "test-job-nginx-0" [51c38b2a-304c-4e67-803a-354219a397b5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [51c38b2a-304c-4e67-803a-354219a397b5] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.003538442s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-897172 addons disable volcano --alsologtostderr -v=1: (12.093179065s)
--- PASS: TestAddons/serial/Volcano (39.70s)
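A hedged sketch of the readiness poll the test helper performs for the Volcano job, with context, namespace and label selector copied from the run above:

	# Sketch, not part of the recorded run.
	kubectl --context addons-897172 get pods -n my-volcano -l volcano.sh/job-name=test-job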

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-897172 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-897172 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (9.85s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-897172 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-897172 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c1c5ee4d-e324-4a8c-b2dc-72ef66940dda] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c1c5ee4d-e324-4a8c-b2dc-72ef66940dda] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003856696s
addons_test.go:694: (dbg) Run:  kubectl --context addons-897172 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-897172 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-897172 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-897172 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.85s)

TestAddons/parallel/Registry (16.91s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.228394ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-m2bbx" [1ff58f99-278b-411c-a5e0-83d367ff2f01] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004021969s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-mhx2x" [5d015e78-b666-4fb1-b7f4-92ef0c57241e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003240721s
addons_test.go:392: (dbg) Run:  kubectl --context addons-897172 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-897172 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-897172 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.845586822s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 ip
2025/10/18 12:05:01 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.91s)
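
Both probes in this test can be rerun by hand. The in-cluster command below mirrors the test; the host-side curl is an added sketch that assumes the addon keeps its default port mapping (/v2/_catalog is the standard registry HTTP API listing endpoint, which the test does not itself query):

	kubectl --context addons-897172 run registry-probe --rm --restart=Never -it \
	  --image=gcr.io/k8s-minikube/busybox -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	curl "http://$(minikube -p addons-897172 ip):5000/v2/_catalog"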

                                                
                                    
TestAddons/parallel/RegistryCreds (0.74s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.745841ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-897172
addons_test.go:332: (dbg) Run:  kubectl --context addons-897172 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.74s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.28s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-6z8nf" [810a780c-a1b0-4df6-9999-a54c36ed8b53] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003146415s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.28s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.321084ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-l9lnl" [6f318350-01b1-4662-8b11-9623be573bf2] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003533634s
addons_test.go:463: (dbg) Run:  kubectl --context addons-897172 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.87s)

                                                
                                    
TestAddons/parallel/CSI (38.25s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1018 12:05:26.031067 2076961 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1018 12:05:26.036269 2076961 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 12:05:26.036296 2076961 kapi.go:107] duration metric: took 8.092332ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.102474ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-897172 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-897172 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [a7a9f2c7-c681-48dc-a0bf-494d0cb3f0ad] Pending
helpers_test.go:352: "task-pv-pod" [a7a9f2c7-c681-48dc-a0bf-494d0cb3f0ad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [a7a9f2c7-c681-48dc-a0bf-494d0cb3f0ad] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003289887s
addons_test.go:572: (dbg) Run:  kubectl --context addons-897172 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-897172 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-897172 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-897172 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-897172 delete pod task-pv-pod: (1.141976072s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-897172 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-897172 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-897172 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-897172 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [d0303264-7331-4b1f-a148-6d21a2768d97] Pending
helpers_test.go:352: "task-pv-pod-restore" [d0303264-7331-4b1f-a148-6d21a2768d97] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [d0303264-7331-4b1f-a148-6d21a2768d97] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00344269s
addons_test.go:614: (dbg) Run:  kubectl --context addons-897172 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-897172 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-897172 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-897172 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.770138903s)
--- PASS: TestAddons/parallel/CSI (38.25s)
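
The snapshot/restore half of this flow, sketched as standalone manifests. The class names (csi-hostpath-sc, csi-hostpath-snapclass) are assumptions about the addon's defaults, and the size is a placeholder; the real testdata files are not shown in the log:

	kubectl --context addons-897172 apply -f - <<-'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass
	  source:
	    persistentVolumeClaimName: hpvc
	EOF
	kubectl --context addons-897172 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc
	  accessModes: [ReadWriteOnce]
	  resources:
	    requests:
	      storage: 1Gi
	  dataSource:
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io
	EOF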

                                                
                                    
TestAddons/parallel/Headlamp (17.89s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-897172 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-897172 --alsologtostderr -v=1: (1.065082176s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-qmcg6" [ac5c743d-391d-4227-bfc3-597c7c1f6e99] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-qmcg6" [ac5c743d-391d-4227-bfc3-597c7c1f6e99] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004222334s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-897172 addons disable headlamp --alsologtostderr -v=1: (5.816275155s)
--- PASS: TestAddons/parallel/Headlamp (17.89s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-nfcft" [2eef8558-cae4-4d3b-b963-c66a30be4a19] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003382248s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.65s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-9gjsj" [1564a3ac-4f7b-4ea4-be9f-210c9a9f2c5f] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00329327s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.65s)

                                                
                                    
TestAddons/parallel/Yakd (11.81s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-t52bk" [cd929b31-5269-48b3-99f2-0043749a72e4] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003236923s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-897172 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-897172 addons disable yakd --alsologtostderr -v=1: (5.802015395s)
--- PASS: TestAddons/parallel/Yakd (11.81s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.32s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-897172
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-897172: (12.054385428s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-897172
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-897172
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-897172
--- PASS: TestAddons/StoppedEnableDisable (12.32s)

                                                
                                    
TestCertOptions (38.68s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-503187 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-503187 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.732147909s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-503187 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-503187 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-503187 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-503187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-503187
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-503187: (2.215460914s)
--- PASS: TestCertOptions (38.68s)
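
To eyeball by hand that the extra names, IPs, and port landed in the serving cert, the same openssl call can be filtered for the SAN block (the grep is an addition; the test parses the full text output):

	minikube ssh -p cert-options-503187 -- \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'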

                                                
                                    
TestCertExpiration (235.25s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-322545 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-322545 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (41.010277012s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-322545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-322545 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (9.500093589s)
helpers_test.go:175: Cleaning up "cert-expiration-322545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-322545
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-322545: (4.743784726s)
--- PASS: TestCertExpiration (235.25s)
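
The rotation can be observed directly by reading the cert's validity window before and after the second start; a sketch reusing the cert path from TestCertOptions:

	minikube ssh -p cert-expiration-322545 -- \
	  "openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"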

                                                
                                    
TestForceSystemdFlag (34.44s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-328458 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-328458 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.633288557s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-328458 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-328458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-328458
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-328458: (2.498022427s)
--- PASS: TestForceSystemdFlag (34.44s)

                                                
                                    
TestForceSystemdEnv (42.93s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-475987 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-475987 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.438876303s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-475987 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-475987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-475987
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-475987: (3.048573967s)
--- PASS: TestForceSystemdEnv (42.93s)
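
Both systemd variants assert on the rendered containerd config. A narrower hand check for the cgroup driver; SystemdCgroup is the usual runc option key in containerd's CRI config (the tests themselves just cat the whole file):

	minikube ssh -p force-systemd-env-475987 -- \
	  "grep SystemdCgroup /etc/containerd/config.toml"    # expect: SystemdCgroup = true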

                                                
                                    
TestErrorSpam/setup (32.91s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-515243 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-515243 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-515243 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-515243 --driver=docker  --container-runtime=containerd: (32.909216866s)
--- PASS: TestErrorSpam/setup (32.91s)

                                                
                                    
TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

                                                
                                    
TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 status
--- PASS: TestErrorSpam/status (1.10s)

                                                
                                    
TestErrorSpam/pause (1.98s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 pause
--- PASS: TestErrorSpam/pause (1.98s)

                                                
                                    
TestErrorSpam/unpause (1.98s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 unpause
--- PASS: TestErrorSpam/unpause (1.98s)

                                                
                                    
TestErrorSpam/stop (1.67s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 stop: (1.474093401s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-515243 --log_dir /tmp/nospam-515243 stop
--- PASS: TestErrorSpam/stop (1.67s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21647-2075029/.minikube/files/etc/test/nested/copy/2076961/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (80.61s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-955523 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-955523 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m20.612888401s)
--- PASS: TestFunctional/serial/StartWithProxy (80.61s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.12s)

=== RUN   TestFunctional/serial/SoftStart
I1018 12:17:29.294788 2076961 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-955523 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-955523 --alsologtostderr -v=8: (7.121118532s)
functional_test.go:678: soft start took 7.122438505s for "functional-955523" cluster.
I1018 12:17:36.416307 2076961 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (7.12s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-955523 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-955523 cache add registry.k8s.io/pause:3.1: (1.342386955s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-955523 cache add registry.k8s.io/pause:3.3: (1.138006737s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-955523 cache add registry.k8s.io/pause:latest: (1.071500776s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.55s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-955523 /tmp/TestFunctionalserialCacheCmdcacheadd_local256921319/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 cache add minikube-local-cache-test:functional-955523
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 cache delete minikube-local-cache-test:functional-955523
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-955523
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-955523 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (319.674476ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.94s)
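
The reload round trip above, condensed into a standalone repro (image and profile names taken from the log):

	minikube -p functional-955523 cache add registry.k8s.io/pause:latest
	minikube -p functional-955523 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-955523 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	minikube -p functional-955523 cache reload                                            # pushes cached images back
	minikube -p functional-955523 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0 again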

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 kubectl -- --context functional-955523 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-955523 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.78s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-955523 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-955523 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.784474813s)
functional_test.go:776: restart took 40.784572247s for "functional-955523" cluster.
I1018 12:18:24.963755 2076961 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (40.78s)
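
Whether the extra-config flag actually reached the apiserver can be confirmed against the static pod; a sketch assuming the standard kubeadm component label (the test itself only waits for the restart to converge):

	kubectl --context functional-955523 -n kube-system describe pod \
	  -l component=kube-apiserver | grep enable-admission-plugins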

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-955523 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.51s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-955523 logs: (1.512877238s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.51s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 logs --file /tmp/TestFunctionalserialLogsFileCmd1625816713/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-955523 logs --file /tmp/TestFunctionalserialLogsFileCmd1625816713/001/logs.txt: (1.507960968s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
TestFunctional/serial/InvalidService (4.75s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-955523 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-955523
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-955523: exit status 115 (404.078244ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32323 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-955523 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-955523 delete -f testdata/invalidsvc.yaml: (1.088492349s)
--- PASS: TestFunctional/serial/InvalidService (4.75s)
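
A minimal stand-in for testdata/invalidsvc.yaml, whose real contents are not shown in the log: a NodePort service whose selector matches no pods, which is exactly the condition SVC_UNREACHABLE reports:

	kubectl --context functional-955523 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-pod
	  ports:
	  - port: 80
	EOF
	minikube service invalid-svc -p functional-955523    # exits 115 with SVC_UNREACHABLE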

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-955523 config get cpus: exit status 14 (92.156256ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-955523 config get cpus: exit status 14 (61.930799ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
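
The exit codes are the contract here: config get returns 14 when the key is unset and 0 once set. The same sequence as a standalone script:

	minikube -p functional-955523 config get cpus     # exit 14: key not found
	minikube -p functional-955523 config set cpus 2
	minikube -p functional-955523 config get cpus     # prints 2, exit 0
	minikube -p functional-955523 config unset cpus
	minikube -p functional-955523 config get cpus     # exit 14 again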

                                                
                                    
TestFunctional/parallel/DryRun (0.66s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-955523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-955523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (204.021982ms)

                                                
                                                
-- stdout --
	* [functional-955523] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:28:59.674558 2121002 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:28:59.674941 2121002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:28:59.674981 2121002 out.go:374] Setting ErrFile to fd 2...
	I1018 12:28:59.675001 2121002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:28:59.675378 2121002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
	I1018 12:28:59.675906 2121002 out.go:368] Setting JSON to false
	I1018 12:28:59.677020 2121002 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":51087,"bootTime":1760739453,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 12:28:59.677123 2121002 start.go:141] virtualization:  
	I1018 12:28:59.680813 2121002 out.go:179] * [functional-955523] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 12:28:59.683832 2121002 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:28:59.684019 2121002 notify.go:220] Checking for updates...
	I1018 12:28:59.689690 2121002 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:28:59.692525 2121002 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	I1018 12:28:59.695496 2121002 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	I1018 12:28:59.698327 2121002 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:28:59.701212 2121002 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:28:59.704564 2121002 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:28:59.705158 2121002 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:28:59.740938 2121002 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:28:59.741054 2121002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:28:59.814226 2121002 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 12:28:59.804762187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:28:59.814342 2121002 docker.go:318] overlay module found
	I1018 12:28:59.817398 2121002 out.go:179] * Using the docker driver based on existing profile
	I1018 12:28:59.820198 2121002 start.go:305] selected driver: docker
	I1018 12:28:59.820217 2121002 start.go:925] validating driver "docker" against &{Name:functional-955523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-955523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:28:59.820326 2121002 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:28:59.823900 2121002 out.go:203] 
	W1018 12:28:59.826773 2121002 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 12:28:59.829653 2121002 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-955523 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.66s)
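
The failing half of the dry run reproduces on its own; minikube validates the memory request up front, so no container is created:

	minikube start -p functional-955523 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=containerd
	echo $?    # 23 (RSRC_INSUFFICIENT_REQ_MEMORY: 250MiB is below the 1800MB minimum)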

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-955523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-955523 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (228.955219ms)

                                                
                                                
-- stdout --
	* [functional-955523] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 12:29:00.371311 2121122 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:29:00.371573 2121122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:29:00.371611 2121122 out.go:374] Setting ErrFile to fd 2...
	I1018 12:29:00.371634 2121122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:29:00.373612 2121122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
	I1018 12:29:00.374353 2121122 out.go:368] Setting JSON to false
	I1018 12:29:00.375500 2121122 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":51088,"bootTime":1760739453,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 12:29:00.375628 2121122 start.go:141] virtualization:  
	I1018 12:29:00.380667 2121122 out.go:179] * [functional-955523] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1018 12:29:00.387237 2121122 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 12:29:00.387251 2121122 notify.go:220] Checking for updates...
	I1018 12:29:00.394014 2121122 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 12:29:00.396938 2121122 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	I1018 12:29:00.400032 2121122 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	I1018 12:29:00.403051 2121122 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 12:29:00.406164 2121122 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 12:29:00.409911 2121122 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:29:00.410613 2121122 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 12:29:00.440054 2121122 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 12:29:00.440215 2121122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:29:00.503497 2121122 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 12:29:00.49269586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:29:00.503625 2121122 docker.go:318] overlay module found
	I1018 12:29:00.506801 2121122 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1018 12:29:00.509581 2121122 start.go:305] selected driver: docker
	I1018 12:29:00.509606 2121122 start.go:925] validating driver "docker" against &{Name:functional-955523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-955523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 12:29:00.509727 2121122 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 12:29:00.513382 2121122 out.go:203] 
	W1018 12:29:00.516345 2121122 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1018 12:29:00.519132 2121122 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
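
The French failure message above is the point of the test: with --dry-run --memory 250MB, minikube exits with status 23 and RSRC_INSUFFICIENT_REQ_MEMORY, reporting (in the configured locale) that the requested allocation of 250 MiB is below the usable minimum of 1800 MB. A minimal Go sketch of that kind of floor check follows; the constant and function names are hypothetical, not minikube's actual code:

package main

import (
	"fmt"
	"os"
)

// minUsableMB mirrors the 1800 MB floor quoted in the log above; the name is made up.
const minUsableMB = 1800

// validateRequestedMemory rejects requests below the floor, as the dry run
// above did for --memory 250MB.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested allocation %dMiB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(23) // the exit status observed in the run above
	}
}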

TestFunctional/parallel/StatusCmd (1.05s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/SSHCmd (0.57s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

TestFunctional/parallel/CpCmd (2.08s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh -n functional-955523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 cp functional-955523:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4255148809/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh -n functional-955523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh -n functional-955523 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.08s)

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/2076961/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "sudo cat /etc/test/nested/copy/2076961/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.22s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/2076961.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "sudo cat /etc/ssl/certs/2076961.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/2076961.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "sudo cat /usr/share/ca-certificates/2076961.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/20769612.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "sudo cat /etc/ssl/certs/20769612.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/20769612.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "sudo cat /usr/share/ca-certificates/20769612.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.22s)

TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-955523 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)
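
The go-template passed to kubectl above indexes the first node in the returned list and ranges over its metadata.labels, printing each label key. The same template can be exercised standalone with Go's text/template; the data below is a stand-in, not the cluster's real labels:

package main

import (
	"os"
	"text/template"
)

func main() {
	// The exact template string from the kubectl invocation above.
	const tmpl = `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`

	// Stand-in for the `kubectl get nodes` list structure (label set is illustrative).
	data := map[string]any{
		"items": []map[string]any{
			{"metadata": map[string]any{"labels": map[string]string{
				"kubernetes.io/arch": "arm64",
				"kubernetes.io/os":   "linux",
			}}},
		},
	}

	t := template.Must(template.New("labels").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}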

TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-955523 ssh "sudo systemctl is-active docker": exit status 1 (375.344619ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-955523 ssh "sudo systemctl is-active crio": exit status 1 (354.304476ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
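
The non-zero exits above are expected: `systemctl is-active` prints the unit state and exits 3 for an inactive unit, so the test treats "command failed, stdout says inactive" as a pass. A minimal local Go sketch of reading the state that way (the ssh hop the test uses is elided):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// unitInactive reports whether a systemd unit is inactive, treating systemctl's
// non-zero exit (status 3 for inactive units) as data rather than an error.
func unitInactive(unit string) bool {
	out, _ := exec.Command("systemctl", "is-active", unit).CombinedOutput()
	return strings.TrimSpace(string(out)) == "inactive"
}

func main() {
	// On a containerd node like the one above, both should print true.
	for _, unit := range []string{"docker", "crio"} {
		fmt.Printf("%s inactive: %v\n", unit, unitInactive(unit))
	}
}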

TestFunctional/parallel/License (0.45s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.45s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.17s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-955523 version -o=json --components: (1.165109346s)
--- PASS: TestFunctional/parallel/Version/components (1.17s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-955523 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-955523
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-955523
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-955523 image ls --format short --alsologtostderr:
I1018 12:34:04.885799 2122181 out.go:360] Setting OutFile to fd 1 ...
I1018 12:34:04.885992 2122181 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:34:04.886019 2122181 out.go:374] Setting ErrFile to fd 2...
I1018 12:34:04.886038 2122181 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:34:04.886450 2122181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
I1018 12:34:04.887104 2122181 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1018 12:34:04.887280 2122181 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1018 12:34:04.887942 2122181 cli_runner.go:164] Run: docker container inspect functional-955523 --format={{.State.Status}}
I1018 12:34:04.904806 2122181 ssh_runner.go:195] Run: systemctl --version
I1018 12:34:04.904857 2122181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
I1018 12:34:04.921543 2122181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35709 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/functional-955523/id_rsa Username:docker}
I1018 12:34:05.031108 2122181 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-955523 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server               │ functional-955523  │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ localhost/my-image                          │ functional-955523  │ sha256:e1c542 │ 831kB  │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ docker.io/library/minikube-local-cache-test │ functional-955523  │ sha256:7c8c3f │ 992B   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-955523 image ls --format table --alsologtostderr:
I1018 12:34:09.037365 2122540 out.go:360] Setting OutFile to fd 1 ...
I1018 12:34:09.037500 2122540 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:34:09.037512 2122540 out.go:374] Setting ErrFile to fd 2...
I1018 12:34:09.037530 2122540 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:34:09.037800 2122540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
I1018 12:34:09.038440 2122540 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1018 12:34:09.038605 2122540 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1018 12:34:09.039097 2122540 cli_runner.go:164] Run: docker container inspect functional-955523 --format={{.State.Status}}
I1018 12:34:09.056150 2122540 ssh_runner.go:195] Run: systemctl --version
I1018 12:34:09.056210 2122540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
I1018 12:34:09.073113 2122540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35709 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/functional-955523/id_rsa Username:docker}
I1018 12:34:09.174385 2122540 ssh_runner.go:195] Run: sudo crictl images --output json
E1018 12:35:10.065475 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-955523 image ls --format json --alsologtostderr:
[{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-955523"],"size":"2173567"},{"id":"sha256:7c8c3fe6a7480adce0da5f6697453acd690315d6016a53378d4c2d4df89436ed","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-955523"],"size":"992"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.
28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"},{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6
fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"},{"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.i
o/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"20720058"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:e1c54244d4a3a9371fe9d82e788de484e28b61001ef4733ba0dcf929677bf48f","repoDigests":[],"repoTags":["localhost/my-image:functional-955523"],"size":"830616"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-955523 image ls --format json --alsologtostderr:
I1018 12:34:08.801299 2122503 out.go:360] Setting OutFile to fd 1 ...
I1018 12:34:08.801453 2122503 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:34:08.801483 2122503 out.go:374] Setting ErrFile to fd 2...
I1018 12:34:08.801499 2122503 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:34:08.801868 2122503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
I1018 12:34:08.802602 2122503 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1018 12:34:08.802774 2122503 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1018 12:34:08.803381 2122503 cli_runner.go:164] Run: docker container inspect functional-955523 --format={{.State.Status}}
I1018 12:34:08.823483 2122503 ssh_runner.go:195] Run: systemctl --version
I1018 12:34:08.823564 2122503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
I1018 12:34:08.839919 2122503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35709 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/functional-955523/id_rsa Username:docker}
I1018 12:34:08.946387 2122503 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
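
As the stderr traces show, the listing is derived from `sudo crictl images --output json` on the node; each element of the printed array has the shape id / repoDigests / repoTags / size, with size in bytes serialized as a string. A small Go decoding sketch with the field names taken from that output; the sample entry is copied from the listing above:

package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors the fields visible in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, serialized as a string
}

func main() {
	// One entry copied verbatim from the listing above.
	data := []byte(`[{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"}]`)

	var images []image
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}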

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-955523 image ls --format yaml --alsologtostderr:
- id: sha256:7c8c3fe6a7480adce0da5f6697453acd690315d6016a53378d4c2d4df89436ed
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-955523
size: "992"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-955523
size: "2173567"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-955523 image ls --format yaml --alsologtostderr:
I1018 12:34:05.115338 2122217 out.go:360] Setting OutFile to fd 1 ...
I1018 12:34:05.115464 2122217 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:34:05.115474 2122217 out.go:374] Setting ErrFile to fd 2...
I1018 12:34:05.115479 2122217 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:34:05.115729 2122217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
I1018 12:34:05.116421 2122217 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1018 12:34:05.116547 2122217 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1018 12:34:05.116991 2122217 cli_runner.go:164] Run: docker container inspect functional-955523 --format={{.State.Status}}
I1018 12:34:05.136031 2122217 ssh_runner.go:195] Run: systemctl --version
I1018 12:34:05.136085 2122217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
I1018 12:34:05.153519 2122217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35709 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/functional-955523/id_rsa Username:docker}
I1018 12:34:05.258272 2122217 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-955523 ssh pgrep buildkitd: exit status 1 (263.682907ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image build -t localhost/my-image:functional-955523 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-955523 image build -t localhost/my-image:functional-955523 testdata/build --alsologtostderr: (2.959431501s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-955523 image build -t localhost/my-image:functional-955523 testdata/build --alsologtostderr:
I1018 12:34:05.601903 2122313 out.go:360] Setting OutFile to fd 1 ...
I1018 12:34:05.603418 2122313 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:34:05.603461 2122313 out.go:374] Setting ErrFile to fd 2...
I1018 12:34:05.603484 2122313 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 12:34:05.603802 2122313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
I1018 12:34:05.604497 2122313 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1018 12:34:05.607328 2122313 config.go:182] Loaded profile config "functional-955523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1018 12:34:05.607926 2122313 cli_runner.go:164] Run: docker container inspect functional-955523 --format={{.State.Status}}
I1018 12:34:05.632503 2122313 ssh_runner.go:195] Run: systemctl --version
I1018 12:34:05.632566 2122313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-955523
I1018 12:34:05.649745 2122313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35709 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/functional-955523/id_rsa Username:docker}
I1018 12:34:05.754677 2122313 build_images.go:161] Building image from path: /tmp/build.414085803.tar
I1018 12:34:05.754764 2122313 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1018 12:34:05.762608 2122313 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.414085803.tar
I1018 12:34:05.766706 2122313 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.414085803.tar: stat -c "%s %y" /var/lib/minikube/build/build.414085803.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.414085803.tar': No such file or directory
I1018 12:34:05.766736 2122313 ssh_runner.go:362] scp /tmp/build.414085803.tar --> /var/lib/minikube/build/build.414085803.tar (3072 bytes)
I1018 12:34:05.786201 2122313 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.414085803
I1018 12:34:05.794312 2122313 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.414085803 -xf /var/lib/minikube/build/build.414085803.tar
I1018 12:34:05.802584 2122313 containerd.go:394] Building image: /var/lib/minikube/build/build.414085803
I1018 12:34:05.802658 2122313 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.414085803 --local dockerfile=/var/lib/minikube/build/build.414085803 --output type=image,name=localhost/my-image:functional-955523
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:ea34deee4c4c06eee2d7593349b5fa26e0a76c2ebfa1f869f847ae99fcd1ef84
#8 exporting manifest sha256:ea34deee4c4c06eee2d7593349b5fa26e0a76c2ebfa1f869f847ae99fcd1ef84 0.0s done
#8 exporting config sha256:e1c54244d4a3a9371fe9d82e788de484e28b61001ef4733ba0dcf929677bf48f 0.0s done
#8 naming to localhost/my-image:functional-955523 done
#8 DONE 0.2s
I1018 12:34:08.487817 2122313 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.414085803 --local dockerfile=/var/lib/minikube/build/build.414085803 --output type=image,name=localhost/my-image:functional-955523: (2.685129739s)
I1018 12:34:08.488008 2122313 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.414085803
I1018 12:34:08.496928 2122313 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.414085803.tar
I1018 12:34:08.505125 2122313 build_images.go:217] Built localhost/my-image:functional-955523 from /tmp/build.414085803.tar
I1018 12:34:08.505169 2122313 build_images.go:133] succeeded building to: functional-955523
I1018 12:34:08.505174 2122313 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.45s)
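
Note that the build never touches a Docker daemon: the context is tarred, copied to /var/lib/minikube/build on the node, unpacked, and handed to BuildKit via buildctl with the dockerfile.v0 frontend, exactly as the logged command shows. A Go sketch of issuing the same invocation; the directory argument is a placeholder, and the test actually runs this over ssh rather than locally:

package main

import (
	"fmt"
	"os/exec"
)

// buildImage mirrors the buildctl command in the log above; dir stands in for
// the unpacked context directory (e.g. /var/lib/minikube/build/build.<n>).
func buildImage(dir, tag string) error {
	cmd := exec.Command("sudo", "buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+dir,
		"--local", "dockerfile="+dir,
		"--output", "type=image,name="+tag)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // the #1..#8 step log seen above
	return err
}

func main() {
	if err := buildImage("/tmp/build-context", "localhost/my-image:functional-955523"); err != nil {
		fmt.Println("build failed:", err)
	}
}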

TestFunctional/parallel/ImageCommands/Setup (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-955523
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image load --daemon kicbase/echo-server:functional-955523 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-955523 image load --daemon kicbase/echo-server:functional-955523 --alsologtostderr: (1.181926817s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image load --daemon kicbase/echo-server:functional-955523 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-955523 image load --daemon kicbase/echo-server:functional-955523 --alsologtostderr: (1.097270577s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.39s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-955523
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image load --daemon kicbase/echo-server:functional-955523 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image save kicbase/echo-server:functional-955523 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image rm kicbase/echo-server:functional-955523 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.66s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-955523
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 image save --daemon kicbase/echo-server:functional-955523 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-955523
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-955523 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-955523 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-955523 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 2116195: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-955523 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-955523 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-955523 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/List (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 service list -o json
functional_test.go:1504: Took "340.374966ms" to run "out/minikube-linux-arm64 -p functional-955523 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)
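
Note: the JSON listing is what scripts should consume instead of scraping the table output. A sketch, assuming jq is available and that this minikube build emits a top-level array with a Name field per service (verify the schema against your version):

    out/minikube-linux-arm64 -p functional-955523 service list -o json | jq -r '.[].Name'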

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
E1018 12:28:46.997410 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "362.217365ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "54.701658ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "357.103937ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "55.278618ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
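
Note: the timings above show what --light buys: it skips the per-profile status probes, cutting the listing from ~357 ms to ~55 ms in this run. A sketch, assuming jq and the valid/invalid grouping this build emits:

    out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'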

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.67s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-955523 /tmp/TestFunctionalparallelMountCmdany-port749775523/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760790528268537598" to /tmp/TestFunctionalparallelMountCmdany-port749775523/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760790528268537598" to /tmp/TestFunctionalparallelMountCmdany-port749775523/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760790528268537598" to /tmp/TestFunctionalparallelMountCmdany-port749775523/001/test-1760790528268537598
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-955523 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (359.261179ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1018 12:28:48.628076 2076961 retry.go:31] will retry after 269.785222ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 18 12:28 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 18 12:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 18 12:28 test-1760790528268537598
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh cat /mount-9p/test-1760790528268537598
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-955523 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [1588a421-bed3-46aa-bc00-5fe89a9a782b] Pending
helpers_test.go:352: "busybox-mount" [1588a421-bed3-46aa-bc00-5fe89a9a782b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [1588a421-bed3-46aa-bc00-5fe89a9a782b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [1588a421-bed3-46aa-bc00-5fe89a9a782b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003992529s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-955523 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-955523 /tmp/TestFunctionalparallelMountCmdany-port749775523/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.67s)
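
Note: the initial findmnt failure (exit status 1) is just a race with the mount daemon coming up; retry.go waits ~270 ms and the second probe succeeds. A sketch of the same check, with a hypothetical host directory:

    out/minikube-linux-arm64 mount -p functional-955523 /tmp/hostdir:/mount-9p &        # serve /tmp/hostdir (hypothetical) over 9p
    out/minikube-linux-arm64 -p functional-955523 ssh "findmnt -T /mount-9p | grep 9p"  # confirm the guest sees a 9p filesystem
    out/minikube-linux-arm64 -p functional-955523 ssh "ls -la /mount-9p"                # list the shared files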

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-955523 /tmp/TestFunctionalparallelMountCmdspecific-port817102592/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-955523 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (372.18332ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1018 12:28:56.311799 2076961 retry.go:31] will retry after 320.761591ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-955523 /tmp/TestFunctionalparallelMountCmdspecific-port817102592/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-955523 ssh "sudo umount -f /mount-9p": exit status 1 (266.463138ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr **
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-955523 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-955523 /tmp/TestFunctionalparallelMountCmdspecific-port817102592/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)
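
Note: --port pins the 9p server to a fixed host port instead of an ephemeral one. The trailing "umount: /mount-9p: not mounted" (ssh exit status 32) is tolerated: the mount process had already been stopped, so the forced unmount finds nothing to do. A sketch:

    out/minikube-linux-arm64 mount -p functional-955523 /tmp/hostdir:/mount-9p --port 46464 &   # hypothetical host dir, fixed 9p port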

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.95s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-955523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1692705097/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-955523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1692705097/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-955523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1692705097/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-955523 ssh "findmnt -T" /mount1: exit status 1 (525.78585ms)
** stderr **
	ssh: Process exited with status 1
** /stderr **
I1018 12:28:58.209789 2076961 retry.go:31] will retry after 499.987215ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-955523 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-955523 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-955523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1692705097/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-955523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1692705097/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-955523 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1692705097/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.95s)
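
Note: `mount --kill=true` tears down every mount process for the profile in one call, which is why the three per-mount stop attempts that follow find no parent process left. A sketch:

    out/minikube-linux-arm64 mount -p functional-955523 --kill=true   # kill all outstanding mount daemons for this profile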

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-955523
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-955523
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-955523
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (201.46s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-577447 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (3m20.500644185s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (201.46s)
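
Note: --ha brings the profile up with multiple control-plane nodes (three here: ha-577447, -m02, -m03), fronted by the 192.168.49.254 load-balancer VIP that the later status checks probe. The same invocation, minus test plumbing:

    out/minikube-linux-arm64 -p ha-577447 start --ha --memory 3072 --wait true --driver=docker --container-runtime=containerd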

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.4s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-577447 kubectl -- rollout status deployment/busybox: (4.335006121s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- exec busybox-7b57f96db7-bldtc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- exec busybox-7b57f96db7-bm8hb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- exec busybox-7b57f96db7-kbmwd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- exec busybox-7b57f96db7-bldtc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- exec busybox-7b57f96db7-bm8hb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- exec busybox-7b57f96db7-kbmwd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- exec busybox-7b57f96db7-bldtc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- exec busybox-7b57f96db7-bm8hb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- exec busybox-7b57f96db7-kbmwd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.40s)

TestMultiControlPlane/serial/PingHostFromPods (1.65s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- exec busybox-7b57f96db7-bldtc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- exec busybox-7b57f96db7-bldtc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- exec busybox-7b57f96db7-bm8hb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- exec busybox-7b57f96db7-bm8hb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- exec busybox-7b57f96db7-kbmwd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 kubectl -- exec busybox-7b57f96db7-kbmwd -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.65s)
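
Note: host.minikube.internal resolves inside pods to the host side of the cluster network (192.168.49.1 here); the awk/cut pipeline only strips nslookup's framing so the bare IP can be pinged. Runnable against any of the busybox pods above:

    kubectl --context ha-577447 exec busybox-7b57f96db7-bldtc -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"   # prints e.g. 192.168.49.1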

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.81s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-577447 node add --alsologtostderr -v 5: (58.703952707s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-577447 status --alsologtostderr -v 5: (1.104400451s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.81s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-577447 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.107524263s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

TestMultiControlPlane/serial/CopyFile (19.96s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-577447 status --output json --alsologtostderr -v 5: (1.061588343s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp testdata/cp-test.txt ha-577447:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3344297499/001/cp-test_ha-577447.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447:/home/docker/cp-test.txt ha-577447-m02:/home/docker/cp-test_ha-577447_ha-577447-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m02 "sudo cat /home/docker/cp-test_ha-577447_ha-577447-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447:/home/docker/cp-test.txt ha-577447-m03:/home/docker/cp-test_ha-577447_ha-577447-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m03 "sudo cat /home/docker/cp-test_ha-577447_ha-577447-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447:/home/docker/cp-test.txt ha-577447-m04:/home/docker/cp-test_ha-577447_ha-577447-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m04 "sudo cat /home/docker/cp-test_ha-577447_ha-577447-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp testdata/cp-test.txt ha-577447-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3344297499/001/cp-test_ha-577447-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447-m02:/home/docker/cp-test.txt ha-577447:/home/docker/cp-test_ha-577447-m02_ha-577447.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447 "sudo cat /home/docker/cp-test_ha-577447-m02_ha-577447.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447-m02:/home/docker/cp-test.txt ha-577447-m03:/home/docker/cp-test_ha-577447-m02_ha-577447-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m03 "sudo cat /home/docker/cp-test_ha-577447-m02_ha-577447-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447-m02:/home/docker/cp-test.txt ha-577447-m04:/home/docker/cp-test_ha-577447-m02_ha-577447-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m04 "sudo cat /home/docker/cp-test_ha-577447-m02_ha-577447-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp testdata/cp-test.txt ha-577447-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3344297499/001/cp-test_ha-577447-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447-m03:/home/docker/cp-test.txt ha-577447:/home/docker/cp-test_ha-577447-m03_ha-577447.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447 "sudo cat /home/docker/cp-test_ha-577447-m03_ha-577447.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447-m03:/home/docker/cp-test.txt ha-577447-m02:/home/docker/cp-test_ha-577447-m03_ha-577447-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m02 "sudo cat /home/docker/cp-test_ha-577447-m03_ha-577447-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447-m03:/home/docker/cp-test.txt ha-577447-m04:/home/docker/cp-test_ha-577447-m03_ha-577447-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m04 "sudo cat /home/docker/cp-test_ha-577447-m03_ha-577447-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp testdata/cp-test.txt ha-577447-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3344297499/001/cp-test_ha-577447-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m04 "sudo cat /home/docker/cp-test.txt"
E1018 12:43:37.941352 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:43:37.948026 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:43:37.959472 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:43:37.981078 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:43:38.023090 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:43:38.105481 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447-m04:/home/docker/cp-test.txt ha-577447:/home/docker/cp-test_ha-577447-m04_ha-577447.txt
E1018 12:43:38.267116 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:43:38.588790 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447 "sudo cat /home/docker/cp-test_ha-577447-m04_ha-577447.txt"
E1018 12:43:39.230761 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447-m04:/home/docker/cp-test.txt ha-577447-m02:/home/docker/cp-test_ha-577447-m04_ha-577447-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m02 "sudo cat /home/docker/cp-test_ha-577447-m04_ha-577447-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 cp ha-577447-m04:/home/docker/cp-test.txt ha-577447-m03:/home/docker/cp-test_ha-577447-m04_ha-577447-m03.txt
E1018 12:43:40.512640 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m03 "sudo cat /home/docker/cp-test_ha-577447-m04_ha-577447-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.96s)
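
Note: `minikube cp` accepts a host path or <node>:<path> on either side, so the matrix above covers host-to-node, node-to-host, and node-to-node copies, each verified with `ssh -n <node> sudo cat`. The interleaved cert_rotation errors are background noise: a client-cert watcher still references the deleted functional-955523 profile and logs a miss whenever it fires; the copies are unaffected. One node-to-node hop, with a hypothetical destination name:

    out/minikube-linux-arm64 -p ha-577447 cp ha-577447-m02:/home/docker/cp-test.txt ha-577447-m03:/home/docker/copy.txt
    out/minikube-linux-arm64 -p ha-577447 ssh -n ha-577447-m03 "cat /home/docker/copy.txt"   # verify the contents arrived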

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.92s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 node stop m02 --alsologtostderr -v 5
E1018 12:43:43.075214 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:43:46.997750 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:43:48.197160 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-577447 node stop m02 --alsologtostderr -v 5: (12.114461387s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-577447 status --alsologtostderr -v 5: exit status 7 (807.006663ms)
-- stdout --
	ha-577447
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-577447-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-577447-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-577447-m04
	type: Worker
	host: Running
	kubelet: Running
-- /stdout --
** stderr ** 
	I1018 12:43:53.809121 2139563 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:43:53.809403 2139563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:43:53.809409 2139563 out.go:374] Setting ErrFile to fd 2...
	I1018 12:43:53.809414 2139563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:43:53.809734 2139563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
	I1018 12:43:53.809964 2139563 out.go:368] Setting JSON to false
	I1018 12:43:53.809991 2139563 mustload.go:65] Loading cluster: ha-577447
	I1018 12:43:53.810493 2139563 config.go:182] Loaded profile config "ha-577447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:43:53.810517 2139563 status.go:174] checking status of ha-577447 ...
	I1018 12:43:53.811192 2139563 cli_runner.go:164] Run: docker container inspect ha-577447 --format={{.State.Status}}
	I1018 12:43:53.811701 2139563 notify.go:220] Checking for updates...
	I1018 12:43:53.838971 2139563 status.go:371] ha-577447 host status = "Running" (err=<nil>)
	I1018 12:43:53.838996 2139563 host.go:66] Checking if "ha-577447" exists ...
	I1018 12:43:53.839343 2139563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577447
	I1018 12:43:53.864394 2139563 host.go:66] Checking if "ha-577447" exists ...
	I1018 12:43:53.864711 2139563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:43:53.864762 2139563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577447
	I1018 12:43:53.884830 2139563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35714 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/ha-577447/id_rsa Username:docker}
	I1018 12:43:53.989588 2139563 ssh_runner.go:195] Run: systemctl --version
	I1018 12:43:53.995929 2139563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:43:54.011778 2139563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:43:54.086433 2139563 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-18 12:43:54.074487455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:43:54.087035 2139563 kubeconfig.go:125] found "ha-577447" server: "https://192.168.49.254:8443"
	I1018 12:43:54.087083 2139563 api_server.go:166] Checking apiserver status ...
	I1018 12:43:54.087130 2139563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:43:54.100485 2139563 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1492/cgroup
	I1018 12:43:54.110216 2139563 api_server.go:182] apiserver freezer: "9:freezer:/docker/653d1b3e1f66e0feb7fd78a39182194b3f803a858c8140c72016a62469fbc206/kubepods/burstable/pod5bd898df71f89deb553cb45fe088d05c/1561feaccc0cd59244a553e537b744760fc7bd3380386cf12b8824dbb837d941"
	I1018 12:43:54.110295 2139563 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/653d1b3e1f66e0feb7fd78a39182194b3f803a858c8140c72016a62469fbc206/kubepods/burstable/pod5bd898df71f89deb553cb45fe088d05c/1561feaccc0cd59244a553e537b744760fc7bd3380386cf12b8824dbb837d941/freezer.state
	I1018 12:43:54.118055 2139563 api_server.go:204] freezer state: "THAWED"
	I1018 12:43:54.118087 2139563 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 12:43:54.126427 2139563 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 12:43:54.126452 2139563 status.go:463] ha-577447 apiserver status = Running (err=<nil>)
	I1018 12:43:54.126462 2139563 status.go:176] ha-577447 status: &{Name:ha-577447 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:43:54.126478 2139563 status.go:174] checking status of ha-577447-m02 ...
	I1018 12:43:54.126799 2139563 cli_runner.go:164] Run: docker container inspect ha-577447-m02 --format={{.State.Status}}
	I1018 12:43:54.151917 2139563 status.go:371] ha-577447-m02 host status = "Stopped" (err=<nil>)
	I1018 12:43:54.151939 2139563 status.go:384] host is not running, skipping remaining checks
	I1018 12:43:54.151946 2139563 status.go:176] ha-577447-m02 status: &{Name:ha-577447-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:43:54.151965 2139563 status.go:174] checking status of ha-577447-m03 ...
	I1018 12:43:54.152276 2139563 cli_runner.go:164] Run: docker container inspect ha-577447-m03 --format={{.State.Status}}
	I1018 12:43:54.172578 2139563 status.go:371] ha-577447-m03 host status = "Running" (err=<nil>)
	I1018 12:43:54.172599 2139563 host.go:66] Checking if "ha-577447-m03" exists ...
	I1018 12:43:54.172946 2139563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577447-m03
	I1018 12:43:54.190500 2139563 host.go:66] Checking if "ha-577447-m03" exists ...
	I1018 12:43:54.190816 2139563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:43:54.190867 2139563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577447-m03
	I1018 12:43:54.207799 2139563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35724 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/ha-577447-m03/id_rsa Username:docker}
	I1018 12:43:54.309232 2139563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:43:54.323067 2139563 kubeconfig.go:125] found "ha-577447" server: "https://192.168.49.254:8443"
	I1018 12:43:54.323095 2139563 api_server.go:166] Checking apiserver status ...
	I1018 12:43:54.323137 2139563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:43:54.339108 2139563 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1399/cgroup
	I1018 12:43:54.348424 2139563 api_server.go:182] apiserver freezer: "9:freezer:/docker/358e12446e0de92fcf899603374c58bf200024cdf56fbca346b568fef25219be/kubepods/burstable/pod644b7c35f1608eabb33f6a918e7ff841/d4d754684d1e65a6d26ce087b2a865214a71f8c38a6faf937b27a7d4fde094d5"
	I1018 12:43:54.348519 2139563 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/358e12446e0de92fcf899603374c58bf200024cdf56fbca346b568fef25219be/kubepods/burstable/pod644b7c35f1608eabb33f6a918e7ff841/d4d754684d1e65a6d26ce087b2a865214a71f8c38a6faf937b27a7d4fde094d5/freezer.state
	I1018 12:43:54.357180 2139563 api_server.go:204] freezer state: "THAWED"
	I1018 12:43:54.357213 2139563 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 12:43:54.365630 2139563 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 12:43:54.365658 2139563 status.go:463] ha-577447-m03 apiserver status = Running (err=<nil>)
	I1018 12:43:54.365668 2139563 status.go:176] ha-577447-m03 status: &{Name:ha-577447-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:43:54.365718 2139563 status.go:174] checking status of ha-577447-m04 ...
	I1018 12:43:54.366060 2139563 cli_runner.go:164] Run: docker container inspect ha-577447-m04 --format={{.State.Status}}
	I1018 12:43:54.382993 2139563 status.go:371] ha-577447-m04 host status = "Running" (err=<nil>)
	I1018 12:43:54.383019 2139563 host.go:66] Checking if "ha-577447-m04" exists ...
	I1018 12:43:54.383336 2139563 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-577447-m04
	I1018 12:43:54.400766 2139563 host.go:66] Checking if "ha-577447-m04" exists ...
	I1018 12:43:54.401072 2139563 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:43:54.401124 2139563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-577447-m04
	I1018 12:43:54.419034 2139563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35729 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/ha-577447-m04/id_rsa Username:docker}
	I1018 12:43:54.521105 2139563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:43:54.533797 2139563 status.go:176] ha-577447-m04 status: &{Name:ha-577447-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.92s)
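
Note: the exit status 7 is the assertion that matters here. minikube's status command encodes component health bitwise in its exit code (1 = host not running, 2 = kubelet not running, 4 = apiserver not running), so 7 reports all three down on at least one node, matching the Stopped block for ha-577447-m02 in the stdout above. A quick check:

    out/minikube-linux-arm64 -p ha-577447 status; echo "exit=$?"   # 0 when healthy, 7 with a fully stopped node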

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.66s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 node start m02 --alsologtostderr -v 5
E1018 12:43:58.438655 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-577447 node start m02 --alsologtostderr -v 5: (13.260041715s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-577447 status --alsologtostderr -v 5: (1.292264958s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.66s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.078946038s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.73s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 stop --alsologtostderr -v 5
E1018 12:44:18.920013 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-577447 stop --alsologtostderr -v 5: (37.492346134s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 start --wait true --alsologtostderr -v 5
E1018 12:44:59.882046 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-577447 start --wait true --alsologtostderr -v 5: (59.072697473s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (96.73s)
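
Note: the property under test is that a full stop/start cycle preserves the node roster: `node list` is captured before the stop and compared after the restart. The same check by hand, with a hypothetical scratch file:

    out/minikube-linux-arm64 -p ha-577447 node list > /tmp/nodes.before
    out/minikube-linux-arm64 -p ha-577447 stop
    out/minikube-linux-arm64 -p ha-577447 start --wait true
    out/minikube-linux-arm64 -p ha-577447 node list | diff /tmp/nodes.before -   # empty diff = roster preserved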

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.95s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-577447 node delete m03 --alsologtostderr -v 5: (9.985099579s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.95s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

TestMultiControlPlane/serial/StopCluster (36.11s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 stop --alsologtostderr -v 5
E1018 12:46:21.804029 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-577447 stop --alsologtostderr -v 5: (35.996982565s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-577447 status --alsologtostderr -v 5: exit status 7 (113.255646ms)
-- stdout --
	ha-577447
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-577447-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-577447-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I1018 12:46:35.655356 2154510 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:46:35.655485 2154510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:46:35.655496 2154510 out.go:374] Setting ErrFile to fd 2...
	I1018 12:46:35.655502 2154510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:46:35.655832 2154510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
	I1018 12:46:35.656259 2154510 out.go:368] Setting JSON to false
	I1018 12:46:35.656510 2154510 notify.go:220] Checking for updates...
	I1018 12:46:35.656514 2154510 mustload.go:65] Loading cluster: ha-577447
	I1018 12:46:35.657256 2154510 config.go:182] Loaded profile config "ha-577447": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:46:35.657304 2154510 status.go:174] checking status of ha-577447 ...
	I1018 12:46:35.657847 2154510 cli_runner.go:164] Run: docker container inspect ha-577447 --format={{.State.Status}}
	I1018 12:46:35.676022 2154510 status.go:371] ha-577447 host status = "Stopped" (err=<nil>)
	I1018 12:46:35.676041 2154510 status.go:384] host is not running, skipping remaining checks
	I1018 12:46:35.676048 2154510 status.go:176] ha-577447 status: &{Name:ha-577447 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:46:35.676075 2154510 status.go:174] checking status of ha-577447-m02 ...
	I1018 12:46:35.676395 2154510 cli_runner.go:164] Run: docker container inspect ha-577447-m02 --format={{.State.Status}}
	I1018 12:46:35.697620 2154510 status.go:371] ha-577447-m02 host status = "Stopped" (err=<nil>)
	I1018 12:46:35.697643 2154510 status.go:384] host is not running, skipping remaining checks
	I1018 12:46:35.697651 2154510 status.go:176] ha-577447-m02 status: &{Name:ha-577447-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:46:35.697671 2154510 status.go:174] checking status of ha-577447-m04 ...
	I1018 12:46:35.697970 2154510 cli_runner.go:164] Run: docker container inspect ha-577447-m04 --format={{.State.Status}}
	I1018 12:46:35.714840 2154510 status.go:371] ha-577447-m04 host status = "Stopped" (err=<nil>)
	I1018 12:46:35.714861 2154510 status.go:384] host is not running, skipping remaining checks
	I1018 12:46:35.714868 2154510 status.go:176] ha-577447-m04 status: &{Name:ha-577447-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.11s)

TestMultiControlPlane/serial/RestartCluster (59.93s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-577447 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (58.965743014s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (59.93s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (96.55s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 node add --control-plane --alsologtostderr -v 5
E1018 12:48:37.940802 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:48:46.998222 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:49:05.645366 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-577447 node add --control-plane --alsologtostderr -v 5: (1m35.441486934s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-577447 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-577447 status --alsologtostderr -v 5: (1.108861978s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (96.55s)
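Note: reproducing this step by hand takes just the two commands the test drives; a sketch, assuming the ha-577447 profile from this run is still up:

	out/minikube-linux-arm64 -p ha-577447 node add --control-plane   # join an additional control-plane node
	out/minikube-linux-arm64 -p ha-577447 status                     # each control-plane node should report apiserver: Running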

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.076285474s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

TestJSONOutput/start/Command (83.17s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-862477 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-862477 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m23.161811895s)
--- PASS: TestJSONOutput/start/Command (83.17s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-862477 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-862477 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-862477 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-862477 --output=json --user=testUser: (6.0817654s)
--- PASS: TestJSONOutput/stop/Command (6.08s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-393811 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-393811 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (85.816046ms)
-- stdout --
	{"specversion":"1.0","id":"b7f7826a-1eab-4a39-9b4d-10b7b4c90d1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-393811] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e3fdaac4-e2f3-4f85-a5a4-16d21be25757","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21647"}}
	{"specversion":"1.0","id":"ac3e79a9-ed65-456f-a8e7-f93584a711ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fa2d3343-032d-4bcb-b163-f5aba3315403","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig"}}
	{"specversion":"1.0","id":"4e4ba8ff-6166-4e94-ab03-6783276e0cf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube"}}
	{"specversion":"1.0","id":"85b5a1a2-add8-41fa-a6be-1a28a90890ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e22278fd-ac56-4092-98c8-7b1298d99c26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"99067062-89d5-4001-a1ab-5300e28f8b07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-393811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-393811
--- PASS: TestErrorJSONOutput (0.23s)
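Note: with --output=json every line on stdout is a self-contained CloudEvents object, so the stream is easy to post-process. A sketch of extracting the failure message from a run like the one above, assuming jq is installed:

	out/minikube-linux-arm64 start -p json-output-error-393811 --memory=3072 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	# -> The driver 'fail' is not supported on linux/arm64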

                                                
                                    
TestKicCustomNetwork/create_custom_network (44.71s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-781222 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-781222 --network=: (42.412088432s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-781222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-781222
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-781222: (2.269272077s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.71s)

TestKicCustomNetwork/use_default_bridge_network (40.93s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-812739 --network=bridge
E1018 12:51:50.067992 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-812739 --network=bridge: (38.780903399s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-812739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-812739
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-812739: (2.121515421s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (40.93s)

TestKicExistingNetwork (35.3s)

=== RUN   TestKicExistingNetwork
I1018 12:52:23.979973 2076961 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1018 12:52:23.995717 2076961 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1018 12:52:23.995795 2076961 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1018 12:52:23.995814 2076961 cli_runner.go:164] Run: docker network inspect existing-network
W1018 12:52:24.013464 2076961 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1018 12:52:24.013501 2076961 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1018 12:52:24.013517 2076961 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1018 12:52:24.013651 2076961 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1018 12:52:24.032727 2076961 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-eb79eea33bc6 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a6:a5:1a:04:93:13} reservation:<nil>}
I1018 12:52:24.033027 2076961 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40015b9b40}
I1018 12:52:24.033052 2076961 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1018 12:52:24.033104 2076961 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1018 12:52:24.104288 2076961 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-314871 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-314871 --network=existing-network: (32.982682789s)
helpers_test.go:175: Cleaning up "existing-network-314871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-314871
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-314871: (2.15517575s)
I1018 12:52:59.259066 2076961 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.30s)
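Note: the flow above is simply "create the network out of band, then hand its name to minikube". A by-hand sketch using the same flags the log shows (the test additionally sets MTU options and minikube labels):

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
	out/minikube-linux-arm64 start -p existing-network-314871 --network=existing-network   # reuses the network instead of creating one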

                                                
                                    
TestKicCustomSubnet (36.2s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-310908 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-310908 --subnet=192.168.60.0/24: (33.9632727s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-310908 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-310908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-310908
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-310908: (2.203658244s)
--- PASS: TestKicCustomSubnet (36.20s)
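Note: the assertion pairs a start flag with a docker-side read-back; roughly, with the names from this run:

	out/minikube-linux-arm64 start -p custom-subnet-310908 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-310908 --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24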

                                                
                                    
TestKicStaticIP (37.44s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-171062 --static-ip=192.168.200.200
E1018 12:53:37.940955 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:53:47.002270 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-171062 --static-ip=192.168.200.200: (35.119783164s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-171062 ip
helpers_test.go:175: Cleaning up "static-ip-171062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-171062
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-171062: (2.169015767s)
--- PASS: TestKicStaticIP (37.44s)
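Note: same pattern for static IPs: request one at start, then read it back. A sketch with the values from this run:

	out/minikube-linux-arm64 start -p static-ip-171062 --static-ip=192.168.200.200
	out/minikube-linux-arm64 -p static-ip-171062 ip   # expect 192.168.200.200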

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (75.46s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-355775 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-355775 --driver=docker  --container-runtime=containerd: (34.158841941s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-358376 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-358376 --driver=docker  --container-runtime=containerd: (35.332837426s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-355775
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-358376
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-358376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-358376
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-358376: (2.532043828s)
helpers_test.go:175: Cleaning up "first-355775" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-355775
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-355775: (2.028302149s)
--- PASS: TestMinikubeProfile (75.46s)
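Note: the profile commands above switch the active profile and then read the list back; in short:

	out/minikube-linux-arm64 profile first-355775    # make first-355775 the active profile
	out/minikube-linux-arm64 profile list -ojson     # machine-readable list the test asserts against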

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.67s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-101265 --memory=3072 --mount-string /tmp/TestMountStartserial2565003006/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-101265 --memory=3072 --mount-string /tmp/TestMountStartserial2565003006/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.669731802s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.67s)
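Note: the start line above maps the host directory /tmp/TestMountStartserial2565003006/001 into the guest at /minikube-host (the host:guest pair in --mount-string), with ownership and 9p msize tuned via --mount-uid/--mount-gid/--mount-msize; --no-kubernetes skips cluster bring-up so only the mount is exercised. The VerifyMount* steps that follow check it with:

	out/minikube-linux-arm64 -p mount-start-1-101265 ssh -- ls /minikube-host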

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-101265 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (6.06s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-103554 --memory=3072 --mount-string /tmp/TestMountStartserial2565003006/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-103554 --memory=3072 --mount-string /tmp/TestMountStartserial2565003006/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.060271726s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.06s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-103554 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-101265 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-101265 --alsologtostderr -v=5: (1.698574342s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-103554 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.31s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-103554
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-103554: (1.307834954s)
--- PASS: TestMountStart/serial/Stop (1.31s)

TestMountStart/serial/RestartStopped (7.97s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-103554
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-103554: (6.973404549s)
--- PASS: TestMountStart/serial/RestartStopped (7.97s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-103554 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (111.8s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-346373 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-346373 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m51.26525911s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.80s)

TestMultiNode/serial/DeployApp2Nodes (5.58s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346373 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346373 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-346373 -- rollout status deployment/busybox: (3.587762212s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346373 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346373 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346373 -- exec busybox-7b57f96db7-4j4fs -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346373 -- exec busybox-7b57f96db7-8fp88 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346373 -- exec busybox-7b57f96db7-4j4fs -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346373 -- exec busybox-7b57f96db7-8fp88 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346373 -- exec busybox-7b57f96db7-4j4fs -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346373 -- exec busybox-7b57f96db7-8fp88 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.58s)

TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346373 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346373 -- exec busybox-7b57f96db7-4j4fs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346373 -- exec busybox-7b57f96db7-4j4fs -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346373 -- exec busybox-7b57f96db7-8fp88 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-346373 -- exec busybox-7b57f96db7-8fp88 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)
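Note: the host-reachability probe is a small pipeline: resolve host.minikube.internal inside the pod, cut out the address (NR==5 matches the answer line of busybox's nslookup output), then ping it. Roughly, assuming kubectl points at this cluster and the pod names from this run:

	kubectl exec busybox-7b57f96db7-4j4fs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"   # -> 192.168.67.1 here
	kubectl exec busybox-7b57f96db7-4j4fs -- sh -c "ping -c 1 192.168.67.1"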

                                                
                                    
TestMultiNode/serial/AddNode (55.38s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-346373 -v=5 --alsologtostderr
E1018 12:58:37.941327 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 12:58:46.997992 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-346373 -v=5 --alsologtostderr: (54.687340984s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.38s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-346373 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.42s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 cp testdata/cp-test.txt multinode-346373:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 cp multinode-346373:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4262348206/001/cp-test_multinode-346373.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 cp multinode-346373:/home/docker/cp-test.txt multinode-346373-m02:/home/docker/cp-test_multinode-346373_multinode-346373-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373-m02 "sudo cat /home/docker/cp-test_multinode-346373_multinode-346373-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 cp multinode-346373:/home/docker/cp-test.txt multinode-346373-m03:/home/docker/cp-test_multinode-346373_multinode-346373-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373-m03 "sudo cat /home/docker/cp-test_multinode-346373_multinode-346373-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 cp testdata/cp-test.txt multinode-346373-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 cp multinode-346373-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4262348206/001/cp-test_multinode-346373-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 cp multinode-346373-m02:/home/docker/cp-test.txt multinode-346373:/home/docker/cp-test_multinode-346373-m02_multinode-346373.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373 "sudo cat /home/docker/cp-test_multinode-346373-m02_multinode-346373.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 cp multinode-346373-m02:/home/docker/cp-test.txt multinode-346373-m03:/home/docker/cp-test_multinode-346373-m02_multinode-346373-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373-m03 "sudo cat /home/docker/cp-test_multinode-346373-m02_multinode-346373-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 cp testdata/cp-test.txt multinode-346373-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 cp multinode-346373-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4262348206/001/cp-test_multinode-346373-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 cp multinode-346373-m03:/home/docker/cp-test.txt multinode-346373:/home/docker/cp-test_multinode-346373-m03_multinode-346373.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373 "sudo cat /home/docker/cp-test_multinode-346373-m03_multinode-346373.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 cp multinode-346373-m03:/home/docker/cp-test.txt multinode-346373-m02:/home/docker/cp-test_multinode-346373-m03_multinode-346373-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 ssh -n multinode-346373-m02 "sudo cat /home/docker/cp-test_multinode-346373-m03_multinode-346373-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.42s)
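Note: all of the copies above are variations of one command; minikube cp accepts a local path or a node:path on either side. The three shapes exercised, with the node names from this run (/tmp/out.txt is an arbitrary local destination):

	out/minikube-linux-arm64 -p multinode-346373 cp testdata/cp-test.txt multinode-346373:/home/docker/cp-test.txt             # local -> node
	out/minikube-linux-arm64 -p multinode-346373 cp multinode-346373:/home/docker/cp-test.txt /tmp/out.txt                    # node -> local
	out/minikube-linux-arm64 -p multinode-346373 cp multinode-346373:/home/docker/cp-test.txt multinode-346373-m02:/home/docker/cp-test.txt   # node -> node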

                                                
                                    
TestMultiNode/serial/StopNode (2.38s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-346373 node stop m03: (1.327861854s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-346373 status: exit status 7 (529.215881ms)
-- stdout --
	multinode-346373
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-346373-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-346373-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-346373 status --alsologtostderr: exit status 7 (527.399783ms)
-- stdout --
	multinode-346373
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-346373-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-346373-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I1018 12:59:02.965787 2208090 out.go:360] Setting OutFile to fd 1 ...
	I1018 12:59:02.966015 2208090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:59:02.966044 2208090 out.go:374] Setting ErrFile to fd 2...
	I1018 12:59:02.966064 2208090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 12:59:02.966393 2208090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
	I1018 12:59:02.966692 2208090 out.go:368] Setting JSON to false
	I1018 12:59:02.966768 2208090 mustload.go:65] Loading cluster: multinode-346373
	I1018 12:59:02.966839 2208090 notify.go:220] Checking for updates...
	I1018 12:59:02.967798 2208090 config.go:182] Loaded profile config "multinode-346373": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 12:59:02.967889 2208090 status.go:174] checking status of multinode-346373 ...
	I1018 12:59:02.968495 2208090 cli_runner.go:164] Run: docker container inspect multinode-346373 --format={{.State.Status}}
	I1018 12:59:02.986229 2208090 status.go:371] multinode-346373 host status = "Running" (err=<nil>)
	I1018 12:59:02.986256 2208090 host.go:66] Checking if "multinode-346373" exists ...
	I1018 12:59:02.986629 2208090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-346373
	I1018 12:59:03.015683 2208090 host.go:66] Checking if "multinode-346373" exists ...
	I1018 12:59:03.016046 2208090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:59:03.016095 2208090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-346373
	I1018 12:59:03.035941 2208090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35834 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/multinode-346373/id_rsa Username:docker}
	I1018 12:59:03.137247 2208090 ssh_runner.go:195] Run: systemctl --version
	I1018 12:59:03.144015 2208090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:59:03.156850 2208090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 12:59:03.211927 2208090 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 12:59:03.202599045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 12:59:03.212470 2208090 kubeconfig.go:125] found "multinode-346373" server: "https://192.168.67.2:8443"
	I1018 12:59:03.212514 2208090 api_server.go:166] Checking apiserver status ...
	I1018 12:59:03.212569 2208090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 12:59:03.225395 2208090 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1440/cgroup
	I1018 12:59:03.234642 2208090 api_server.go:182] apiserver freezer: "9:freezer:/docker/0077d30584fc534517623c997001a85a26ffc1466f940da8fc2ad2f853901015/kubepods/burstable/podf467fae7dbea55e4b9c0218a192c3b96/d1bbbd4e8c200345a0f21d8b4a43249cbde244910b3acf41ec21bf04612f43fd"
	I1018 12:59:03.234710 2208090 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0077d30584fc534517623c997001a85a26ffc1466f940da8fc2ad2f853901015/kubepods/burstable/podf467fae7dbea55e4b9c0218a192c3b96/d1bbbd4e8c200345a0f21d8b4a43249cbde244910b3acf41ec21bf04612f43fd/freezer.state
	I1018 12:59:03.242229 2208090 api_server.go:204] freezer state: "THAWED"
	I1018 12:59:03.242258 2208090 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1018 12:59:03.250803 2208090 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1018 12:59:03.250834 2208090 status.go:463] multinode-346373 apiserver status = Running (err=<nil>)
	I1018 12:59:03.250881 2208090 status.go:176] multinode-346373 status: &{Name:multinode-346373 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:59:03.250913 2208090 status.go:174] checking status of multinode-346373-m02 ...
	I1018 12:59:03.251300 2208090 cli_runner.go:164] Run: docker container inspect multinode-346373-m02 --format={{.State.Status}}
	I1018 12:59:03.269577 2208090 status.go:371] multinode-346373-m02 host status = "Running" (err=<nil>)
	I1018 12:59:03.269601 2208090 host.go:66] Checking if "multinode-346373-m02" exists ...
	I1018 12:59:03.269960 2208090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-346373-m02
	I1018 12:59:03.287327 2208090 host.go:66] Checking if "multinode-346373-m02" exists ...
	I1018 12:59:03.287623 2208090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 12:59:03.287662 2208090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-346373-m02
	I1018 12:59:03.304858 2208090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35839 SSHKeyPath:/home/jenkins/minikube-integration/21647-2075029/.minikube/machines/multinode-346373-m02/id_rsa Username:docker}
	I1018 12:59:03.405138 2208090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 12:59:03.418948 2208090 status.go:176] multinode-346373-m02 status: &{Name:multinode-346373-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1018 12:59:03.418980 2208090 status.go:174] checking status of multinode-346373-m03 ...
	I1018 12:59:03.419301 2208090 cli_runner.go:164] Run: docker container inspect multinode-346373-m03 --format={{.State.Status}}
	I1018 12:59:03.439042 2208090 status.go:371] multinode-346373-m03 host status = "Stopped" (err=<nil>)
	I1018 12:59:03.439065 2208090 status.go:384] host is not running, skipping remaining checks
	I1018 12:59:03.439072 2208090 status.go:176] multinode-346373-m03 status: &{Name:multinode-346373-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
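Note: the stderr trace shows how status decides apiserver health on a running node: locate the kube-apiserver process, confirm its freezer cgroup is THAWED, then hit /healthz. A by-hand sketch, assuming the same cgroup v1 freezer layout as this host (<cgroup> stands for the path logged for the pgrep'd PID):

	out/minikube-linux-arm64 -p multinode-346373 ssh -- sudo pgrep -xnf kube-apiserver.*minikube.*               # e.g. 1440
	out/minikube-linux-arm64 -p multinode-346373 ssh -- sudo cat /sys/fs/cgroup/freezer/<cgroup>/freezer.state   # expect THAWED
	curl -k https://192.168.67.2:8443/healthz   # expect ok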

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.3s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-346373 node start m03 -v=5 --alsologtostderr: (7.497306026s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.30s)

TestMultiNode/serial/RestartKeepsNodes (78.57s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-346373
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-346373
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-346373: (25.066659054s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-346373 --wait=true -v=5 --alsologtostderr
E1018 13:00:01.019045 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-346373 --wait=true -v=5 --alsologtostderr: (53.376418888s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-346373
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.57s)
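Note: "restart keeps nodes" is literally stop + start --wait with a node-list comparison around it:

	out/minikube-linux-arm64 node list -p multinode-346373   # record the node set
	out/minikube-linux-arm64 stop -p multinode-346373
	out/minikube-linux-arm64 start -p multinode-346373 --wait=true
	out/minikube-linux-arm64 node list -p multinode-346373   # same set expected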

                                                
                                    
TestMultiNode/serial/DeleteNode (5.64s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-346373 node delete m03: (4.919945212s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.64s)

TestMultiNode/serial/StopMultiNode (24.07s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-346373 stop: (23.866116017s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-346373 status: exit status 7 (100.906235ms)
-- stdout --
	multinode-346373
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-346373-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-346373 status --alsologtostderr: exit status 7 (99.29955ms)
-- stdout --
	multinode-346373
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-346373-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I1018 13:00:59.962043 2216907 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:00:59.962222 2216907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:00:59.962252 2216907 out.go:374] Setting ErrFile to fd 2...
	I1018 13:00:59.962271 2216907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:00:59.962556 2216907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
	I1018 13:00:59.962789 2216907 out.go:368] Setting JSON to false
	I1018 13:00:59.962855 2216907 mustload.go:65] Loading cluster: multinode-346373
	I1018 13:00:59.963311 2216907 config.go:182] Loaded profile config "multinode-346373": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 13:00:59.963364 2216907 status.go:174] checking status of multinode-346373 ...
	I1018 13:00:59.963967 2216907 cli_runner.go:164] Run: docker container inspect multinode-346373 --format={{.State.Status}}
	I1018 13:00:59.962898 2216907 notify.go:220] Checking for updates...
	I1018 13:00:59.982168 2216907 status.go:371] multinode-346373 host status = "Stopped" (err=<nil>)
	I1018 13:00:59.982190 2216907 status.go:384] host is not running, skipping remaining checks
	I1018 13:00:59.982197 2216907 status.go:176] multinode-346373 status: &{Name:multinode-346373 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 13:00:59.982231 2216907 status.go:174] checking status of multinode-346373-m02 ...
	I1018 13:00:59.982560 2216907 cli_runner.go:164] Run: docker container inspect multinode-346373-m02 --format={{.State.Status}}
	I1018 13:01:00.001976 2216907 status.go:371] multinode-346373-m02 host status = "Stopped" (err=<nil>)
	I1018 13:01:00.001996 2216907 status.go:384] host is not running, skipping remaining checks
	I1018 13:01:00.002003 2216907 status.go:176] multinode-346373-m02 status: &{Name:multinode-346373-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.07s)
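
Exit status 7 from "minikube status" is the expected result here, not a failure: in this run it accompanies a host that is fully stopped, so callers have to distinguish it from a real error. A minimal sketch using the profile from this run:

	out/minikube-linux-arm64 -p multinode-346373 status
	rc=$?
	# rc=0: everything running; rc=7 here: host stopped (expected right after "minikube stop")
	[ "$rc" -eq 7 ] && echo "cluster is stopped"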

TestMultiNode/serial/RestartMultiNode (48.57s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-346373 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-346373 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.878982451s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-346373 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.57s)

TestMultiNode/serial/ValidateNameConflict (32.57s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-346373
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-346373-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-346373-m02 --driver=docker  --container-runtime=containerd: exit status 14 (101.897419ms)
-- stdout --
	* [multinode-346373-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-346373-m02' is duplicated with machine name 'multinode-346373-m02' in profile 'multinode-346373'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-346373-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-346373-m03 --driver=docker  --container-runtime=containerd: (29.683843287s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-346373
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-346373: exit status 80 (337.8922ms)
-- stdout --
	* Adding node m03 to cluster multinode-346373 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-346373-m03 already exists in multinode-346373-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-346373-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-346373-m03: (2.399453382s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.57s)
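
Both failures above are name-collision guards: "start -p" exits 14 (MK_USAGE) when the requested profile name duplicates a machine name inside an existing profile, and "node add" exits 80 (GUEST_NODE_ADD) when the node it would create already exists as a standalone profile. A sketch of the first guard, with a hypothetical profile name "demo":

	minikube start -p demo          # creates machine "demo"
	minikube node add -p demo       # adds machine "demo-m02" inside profile "demo"
	minikube start -p demo-m02      # rejected: "demo-m02" duplicates a machine name in profile "demo"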

TestPreload (118.93s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-448367 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-448367 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (55.048601271s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-448367 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-448367 image pull gcr.io/k8s-minikube/busybox: (2.293592695s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-448367
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-448367: (5.877969875s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-448367 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1018 13:03:37.940651 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:03:46.997982 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-448367 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (53.034576613s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-448367 image list
helpers_test.go:175: Cleaning up "test-preload-448367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-448367
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-448367: (2.438431368s)
--- PASS: TestPreload (118.93s)
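
The preload test round-trips an image through a stop/start cycle: create the cluster with preloads disabled, pull an extra image, stop, restart, and confirm the image is still present. A sketch of the same flow with a hypothetical profile name:

	minikube start -p demo --preload=false --container-runtime=containerd --kubernetes-version=v1.32.0
	minikube -p demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p demo
	minikube start -p demo          # restart for the same Kubernetes version
	minikube -p demo image list     # busybox should survive the stop/start cycle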

TestScheduledStopUnix (109.18s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-775356 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-775356 --memory=3072 --driver=docker  --container-runtime=containerd: (32.700202576s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-775356 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-775356 -n scheduled-stop-775356
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-775356 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1018 13:04:57.598268 2076961 retry.go:31] will retry after 128.712µs: open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/scheduled-stop-775356/pid: no such file or directory
I1018 13:04:57.599342 2076961 retry.go:31] will retry after 147.67µs: open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/scheduled-stop-775356/pid: no such file or directory
I1018 13:04:57.600414 2076961 retry.go:31] will retry after 216.806µs: open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/scheduled-stop-775356/pid: no such file or directory
I1018 13:04:57.601540 2076961 retry.go:31] will retry after 347.142µs: open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/scheduled-stop-775356/pid: no such file or directory
I1018 13:04:57.602650 2076961 retry.go:31] will retry after 461.24µs: open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/scheduled-stop-775356/pid: no such file or directory
I1018 13:04:57.603718 2076961 retry.go:31] will retry after 638.276µs: open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/scheduled-stop-775356/pid: no such file or directory
I1018 13:04:57.604839 2076961 retry.go:31] will retry after 665.174µs: open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/scheduled-stop-775356/pid: no such file or directory
I1018 13:04:57.605914 2076961 retry.go:31] will retry after 1.632905ms: open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/scheduled-stop-775356/pid: no such file or directory
I1018 13:04:57.608103 2076961 retry.go:31] will retry after 3.514886ms: open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/scheduled-stop-775356/pid: no such file or directory
I1018 13:04:57.612317 2076961 retry.go:31] will retry after 5.230968ms: open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/scheduled-stop-775356/pid: no such file or directory
I1018 13:04:57.618637 2076961 retry.go:31] will retry after 3.743424ms: open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/scheduled-stop-775356/pid: no such file or directory
I1018 13:04:57.622863 2076961 retry.go:31] will retry after 5.882689ms: open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/scheduled-stop-775356/pid: no such file or directory
I1018 13:04:57.629055 2076961 retry.go:31] will retry after 7.120916ms: open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/scheduled-stop-775356/pid: no such file or directory
I1018 13:04:57.637269 2076961 retry.go:31] will retry after 27.333984ms: open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/scheduled-stop-775356/pid: no such file or directory
I1018 13:04:57.665532 2076961 retry.go:31] will retry after 27.465136ms: open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/scheduled-stop-775356/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-775356 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-775356 -n scheduled-stop-775356
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-775356
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-775356 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-775356
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-775356: exit status 7 (78.929089ms)
-- stdout --
	scheduled-stop-775356
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-775356 -n scheduled-stop-775356
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-775356 -n scheduled-stop-775356: exit status 7 (71.293831ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-775356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-775356
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-775356: (4.932529906s)
--- PASS: TestScheduledStopUnix (109.18s)
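
The scheduled-stop flow exercised above arms a timer, inspects it, cancels it, then re-arms a short one and waits for it to fire. The same sequence by hand, with a hypothetical profile name:

	minikube stop -p demo --schedule 5m                      # arm a stop five minutes out
	minikube status --format='{{.TimeToStop}}' -p demo       # shows the remaining time while armed
	minikube stop -p demo --cancel-scheduled                 # disarm it
	minikube stop -p demo --schedule 15s                     # re-arm; soon after, "minikube status" exits 7 (Stopped)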

TestInsufficientStorage (13.66s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-124870 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-124870 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (11.092783158s)
-- stdout --
	{"specversion":"1.0","id":"c2cd7ebf-6fa1-4f57-865e-826c79d3fda1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-124870] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1e098174-1927-4888-9243-021e5cff5230","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21647"}}
	{"specversion":"1.0","id":"52537ffd-23a8-460a-a661-5c42d5d5d2f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"59b5b7cd-0896-4439-af9b-800a297fe030","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig"}}
	{"specversion":"1.0","id":"389784cb-1e0b-4302-8b32-5ddcb0921478","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube"}}
	{"specversion":"1.0","id":"68fd62de-5d42-4765-acc9-bddff179759a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"18231a29-300d-4412-a4e2-9d5f3c4c87f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fbc64ced-ddb0-4a5e-92aa-bc3ddc84a309","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5b108578-7305-48fb-8dc2-4a145b9913b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8a395b21-a50b-48e7-8527-897a7e472cc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"81f20094-d246-401e-8508-64c740575a2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c4dce7d8-7e7f-468e-8b81-7b4b05faffc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-124870\" primary control-plane node in \"insufficient-storage-124870\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"74ddb76c-ff85-4146-8f09-65d4d76acd90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b77b14bb-b9a5-4406-af88-766d4da6c76d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"dedf617b-c62c-4d59-9c62-e5fc38858464","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-124870 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-124870 --output=json --layout=cluster: exit status 7 (303.044674ms)
-- stdout --
	{"Name":"insufficient-storage-124870","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-124870","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1018 13:06:24.934836 2235621 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-124870" does not appear in /home/jenkins/minikube-integration/21647-2075029/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-124870 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-124870 --output=json --layout=cluster: exit status 7 (311.25668ms)
-- stdout --
	{"Name":"insufficient-storage-124870","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-124870","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1018 13:06:25.247869 2235688 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-124870" does not appear in /home/jenkins/minikube-integration/21647-2075029/kubeconfig
	E1018 13:06:25.257704 2235688 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/insufficient-storage-124870/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-124870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-124870
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-124870: (1.955622691s)
--- PASS: TestInsufficientStorage (13.66s)
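
With --output=json every progress step and error is emitted as a one-line CloudEvents JSON object, which makes the run machine-checkable; the storage limits themselves are faked through the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE variables visible above. A sketch for pulling the error out of the stream, assuming jq is available and a hypothetical profile name:

	minikube start -p demo --output=json --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + "): " + .data.message'
	# for this run: RSRC_DOCKER_STORAGE (exit 26): Docker is out of disk space! ...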

TestRunningBinaryUpgrade (73.81s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1469321376 start -p running-upgrade-468531 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1469321376 start -p running-upgrade-468531 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (42.017469663s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-468531 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-468531 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.502846282s)
helpers_test.go:175: Cleaning up "running-upgrade-468531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-468531
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-468531: (1.991628382s)
--- PASS: TestRunningBinaryUpgrade (73.81s)

TestKubernetesUpgrade (101.18s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-252520 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1018 13:08:30.069341 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-252520 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.784239771s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-252520
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-252520: (1.345884238s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-252520 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-252520 status --format={{.Host}}: exit status 7 (88.787904ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-252520 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1018 13:08:37.941666 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-252520 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.150765691s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-252520 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-252520 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-252520 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (131.251577ms)
-- stdout --
	* [kubernetes-upgrade-252520] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-252520
	    minikube start -p kubernetes-upgrade-252520 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2525202 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-252520 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-252520 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-252520 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (16.097920834s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-252520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-252520
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-252520: (2.435297978s)
--- PASS: TestKubernetesUpgrade (101.18s)
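
The version gate only works in one direction: a stopped cluster restarts onto a newer Kubernetes, but asking for an older one fails fast with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) before anything is touched, and the cluster can still be restarted at its current version afterwards. The same sequence, profile name hypothetical:

	minikube start -p demo --kubernetes-version=v1.28.0
	minikube stop -p demo
	minikube start -p demo --kubernetes-version=v1.34.1   # upgrade on restart: allowed
	minikube start -p demo --kubernetes-version=v1.28.0   # downgrade: exit 106, refused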

TestMissingContainerUpgrade (142.64s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.57817815 start -p missing-upgrade-766148 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.57817815 start -p missing-upgrade-766148 --memory=3072 --driver=docker  --container-runtime=containerd: (1m1.432507508s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-766148
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-766148
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-766148 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-766148 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m13.63724549s)
helpers_test.go:175: Cleaning up "missing-upgrade-766148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-766148
E1018 13:08:46.997989 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-766148: (5.30322671s)
--- PASS: TestMissingContainerUpgrade (142.64s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-958364 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-958364 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (97.170289ms)
-- stdout --
	* [NoKubernetes-958364] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
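
--no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global config trips the same check, which is why the error above suggests unsetting it. A sketch with a hypothetical profile name:

	minikube start -p demo --no-kubernetes --kubernetes-version=v1.28.0   # exit 14, MK_USAGE
	minikube config unset kubernetes-version                              # clear a globally pinned version
	minikube start -p demo --no-kubernetes                                # container runtime only, no Kubernetes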

TestNoKubernetes/serial/StartWithK8s (43.44s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-958364 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-958364 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (42.80090794s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-958364 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.44s)

TestNoKubernetes/serial/StartWithStopK8s (25.06s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-958364 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-958364 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.765671875s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-958364 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-958364 status -o json: exit status 2 (315.826168ms)
-- stdout --
	{"Name":"NoKubernetes-958364","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-958364
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-958364: (1.982688184s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.06s)

TestNoKubernetes/serial/Start (8.05s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-958364 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-958364 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.05181967s)
--- PASS: TestNoKubernetes/serial/Start (8.05s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-958364 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-958364 "sudo systemctl is-active --quiet service kubelet": exit status 1 (287.769057ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
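
The probe above keys off systemctl's exit status: "systemctl is-active --quiet" exits 0 for an active unit and 3 for an inactive one, and "minikube ssh" surfaces the remote status as its own non-zero exit (hence "Process exited with status 3"). A sketch with a hypothetical profile name:

	minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet" \
	  && echo "kubelet is running" || echo "kubelet is not running"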

TestNoKubernetes/serial/ProfileList (0.68s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.68s)

TestNoKubernetes/serial/Stop (1.34s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-958364
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-958364: (1.341833205s)
--- PASS: TestNoKubernetes/serial/Stop (1.34s)

TestNoKubernetes/serial/StartNoArgs (6.92s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-958364 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-958364 --driver=docker  --container-runtime=containerd: (6.919258208s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.92s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-958364 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-958364 "sudo systemctl is-active --quiet service kubelet": exit status 1 (297.252949ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStoppedBinaryUpgrade/Setup (1.74s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.74s)

TestStoppedBinaryUpgrade/Upgrade (66.33s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3971021515 start -p stopped-upgrade-527311 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3971021515 start -p stopped-upgrade-527311 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (40.80657901s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3971021515 -p stopped-upgrade-527311 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3971021515 -p stopped-upgrade-527311 stop: (1.67479088s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-527311 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-527311 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (23.843569239s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (66.33s)
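
The stopped-binary upgrade drives two different binaries against one profile: the old release provisions and stops the cluster, then the binary under test adopts and upgrades it. Condensed from the run above:

	/tmp/minikube-v1.32.0.3971021515 start -p stopped-upgrade-527311 --memory=3072 --vm-driver=docker --container-runtime=containerd
	/tmp/minikube-v1.32.0.3971021515 -p stopped-upgrade-527311 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-527311 --memory=3072 --driver=docker --container-runtime=containerd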

TestStoppedBinaryUpgrade/MinikubeLogs (2.25s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-527311
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-527311: (2.250509983s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.25s)

TestPause/serial/Start (86.3s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-896213 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-896213 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m26.298450083s)
--- PASS: TestPause/serial/Start (86.30s)

TestNetworkPlugins/group/false (4.95s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-599078 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-599078 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (233.929181ms)
-- stdout --
	* [false-599078] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1018 13:11:28.617668 2268636 out.go:360] Setting OutFile to fd 1 ...
	I1018 13:11:28.617801 2268636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:11:28.617813 2268636 out.go:374] Setting ErrFile to fd 2...
	I1018 13:11:28.617817 2268636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 13:11:28.618112 2268636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21647-2075029/.minikube/bin
	I1018 13:11:28.618591 2268636 out.go:368] Setting JSON to false
	I1018 13:11:28.619624 2268636 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":53636,"bootTime":1760739453,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 13:11:28.619693 2268636 start.go:141] virtualization:  
	I1018 13:11:28.623395 2268636 out.go:179] * [false-599078] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 13:11:28.627508 2268636 out.go:179]   - MINIKUBE_LOCATION=21647
	I1018 13:11:28.627573 2268636 notify.go:220] Checking for updates...
	I1018 13:11:28.637383 2268636 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 13:11:28.640304 2268636 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21647-2075029/kubeconfig
	I1018 13:11:28.643105 2268636 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21647-2075029/.minikube
	I1018 13:11:28.646058 2268636 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 13:11:28.648974 2268636 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 13:11:28.652350 2268636 config.go:182] Loaded profile config "pause-896213": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1018 13:11:28.652482 2268636 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 13:11:28.684805 2268636 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 13:11:28.684965 2268636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 13:11:28.779467 2268636 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 13:11:28.764325316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 13:11:28.779573 2268636 docker.go:318] overlay module found
	I1018 13:11:28.782730 2268636 out.go:179] * Using the docker driver based on user configuration
	I1018 13:11:28.785682 2268636 start.go:305] selected driver: docker
	I1018 13:11:28.785703 2268636 start.go:925] validating driver "docker" against <nil>
	I1018 13:11:28.785717 2268636 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 13:11:28.789285 2268636 out.go:203] 
	W1018 13:11:28.791591 2268636 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1018 13:11:28.794980 2268636 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-599078 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-599078

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-599078

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-599078

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-599078

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-599078

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-599078

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-599078

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-599078

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-599078

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-599078

>>> host: /etc/nsswitch.conf:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: /etc/hosts:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: /etc/resolv.conf:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-599078

>>> host: crictl pods:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: crictl containers:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> k8s: describe netcat deployment:
error: context "false-599078" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-599078" does not exist

>>> k8s: netcat logs:
error: context "false-599078" does not exist

>>> k8s: describe coredns deployment:
error: context "false-599078" does not exist

>>> k8s: describe coredns pods:
error: context "false-599078" does not exist

>>> k8s: coredns logs:
error: context "false-599078" does not exist

>>> k8s: describe api server pod(s):
error: context "false-599078" does not exist

>>> k8s: api server logs:
error: context "false-599078" does not exist

>>> host: /etc/cni:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: ip a s:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: ip r s:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: iptables-save:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: iptables table nat:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-599078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-599078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-599078" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 13:10:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-896213
contexts:
- context:
    cluster: pause-896213
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 13:10:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-896213
  name: pause-896213
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-896213
  user:
    client-certificate: /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/pause-896213/client.crt
    client-key: /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/pause-896213/client.key
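
Note: the kubeconfig dumped above knows only the pause-896213 cluster/user/context, and current-context is empty; that is why every kubectl-based collector in this debugLogs run reports that context "false-599078" was not found. A minimal sketch for confirming this by hand, using only commands that already appear in this report:

	# contexts kubectl actually knows about -- false-599078 is absent
	kubectl config get-contexts
	# profiles minikube has on disk, as the error text itself suggests
	out/minikube-linux-arm64 profile list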
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-599078

>>> host: docker daemon status:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: docker daemon config:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: /etc/docker/daemon.json:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: docker system info:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: cri-docker daemon status:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: cri-docker daemon config:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: cri-dockerd version:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: containerd daemon status:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: containerd daemon config:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: /etc/containerd/config.toml:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: containerd config dump:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: crio daemon status:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: crio daemon config:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: /etc/crio:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

>>> host: crio config:
* Profile "false-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-599078"

----------------------- debugLogs end: false-599078 [took: 4.525363165s] --------------------------------
helpers_test.go:175: Cleaning up "false-599078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-599078
--- PASS: TestNetworkPlugins/group/false (4.95s)

TestPause/serial/SecondStartNoReconfiguration (7.11s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-896213 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-896213 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.097992277s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.11s)

TestPause/serial/Pause (0.84s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-896213 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.84s)

TestPause/serial/VerifyStatus (0.43s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-896213 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-896213 --output=json --layout=cluster: exit status 2 (425.061003ms)

-- stdout --
	{"Name":"pause-896213","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-896213","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
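
The status payload above encodes component state as HTTP-style codes: 418 Paused, 405 Stopped, 200 OK. A small sketch, assuming jq is available on the host, for pulling just the per-component state names out of that JSON (the minikube command itself exits 2 while the cluster is paused, but its stdout still flows through the pipe):

	out/minikube-linux-arm64 status -p pause-896213 --output=json --layout=cluster \
	  | jq '.Nodes[].Components | map_values(.StatusName)'
	# per the output above: {"apiserver": "Paused", "kubelet": "Stopped"}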

TestPause/serial/Unpause (0.79s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-896213 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

TestPause/serial/PauseAgain (1.14s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-896213 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-896213 --alsologtostderr -v=5: (1.139174589s)
--- PASS: TestPause/serial/PauseAgain (1.14s)

TestPause/serial/DeletePaused (5.36s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-896213 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-896213 --alsologtostderr -v=5: (5.362194134s)
--- PASS: TestPause/serial/DeletePaused (5.36s)

TestPause/serial/VerifyDeletedResources (1.09s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.015635744s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-896213
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-896213: exit status 1 (24.213667ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-896213: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.09s)
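
VerifyDeletedResources asserts that "delete -p" removed the profile's container, named volume, and network. A hand-run equivalent of those three probes, assuming the pause-896213 profile name from this job:

	docker ps -a --filter name=pause-896213 --format '{{.Names}}'      # expect no output
	docker volume inspect pause-896213                                 # expect exit 1, "no such volume"
	docker network ls --filter name=pause-896213 --format '{{.Name}}'  # expect no output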

TestStartStop/group/old-k8s-version/serial/FirstStart (62.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-671538 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1018 13:13:37.941237 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:13:46.997781 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-671538 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m2.248135148s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.25s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-671538 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8e3750bc-3d6c-4c38-8800-ca58bee90ffa] Pending
helpers_test.go:352: "busybox" [8e3750bc-3d6c-4c38-8800-ca58bee90ffa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8e3750bc-3d6c-4c38-8800-ca58bee90ffa] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003201234s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-671538 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.40s)
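
DeployApp counts the app as healthy once the busybox pod reports Ready and the exec probe succeeds. Roughly the same check can be expressed with kubectl alone (a sketch, not the harness code):

	kubectl --context old-k8s-version-671538 wait pod busybox --for=condition=Ready --timeout=8m
	kubectl --context old-k8s-version-671538 exec busybox -- /bin/sh -c 'ulimit -n'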

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-671538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-671538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.040650305s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-671538 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-671538 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-671538 --alsologtostderr -v=3: (12.06417939s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-671538 -n old-k8s-version-671538
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-671538 -n old-k8s-version-671538: exit status 7 (73.482609ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-671538 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
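
Exit status 7 from "minikube status" is expected for a stopped cluster: per minikube's own help text, the code is a bitmask (1 = host not OK, 2 = cluster not OK, 4 = kubernetes not OK), so 7 means all three are down, which is why the harness notes "may be ok". A quick sketch:

	out/minikube-linux-arm64 status -p old-k8s-version-671538
	echo $?   # 7 == 1 + 2 + 4: host, cluster and kubernetes all stopped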

TestStartStop/group/old-k8s-version/serial/SecondStart (50.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-671538 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-671538 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (49.823612574s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-671538 -n old-k8s-version-671538
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.23s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8mjt6" [827a31db-7683-45b4-885e-4662ffd8c438] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004428249s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8mjt6" [827a31db-7683-45b4-885e-4662ffd8c438] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003368214s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-671538 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-671538 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
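
VerifyKubernetesImages lists the images in the profile and flags anything that is not a stock minikube/Kubernetes image, such as the kindnetd and busybox tags above. Assuming the JSON from "image list --format=json" is an array of objects carrying a repoTags field (layout inferred, not shown in this log), the tags could be listed with:

	out/minikube-linux-arm64 -p old-k8s-version-671538 image list --format=json \
	  | jq -r '.[].repoTags[]?'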

TestStartStop/group/old-k8s-version/serial/Pause (4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-671538 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-671538 --alsologtostderr -v=1: (1.081799053s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-671538 -n old-k8s-version-671538
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-671538 -n old-k8s-version-671538: exit status 2 (425.140609ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-671538 -n old-k8s-version-671538
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-671538 -n old-k8s-version-671538: exit status 2 (429.444895ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-671538 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-671538 -n old-k8s-version-671538
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-671538 -n old-k8s-version-671538
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.00s)

TestStartStop/group/embed-certs/serial/FirstStart (61.7s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-278248 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-278248 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m1.694971686s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.70s)

TestStartStop/group/no-preload/serial/FirstStart (76.26s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-896178 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-896178 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m16.256710609s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.26s)

TestStartStop/group/embed-certs/serial/DeployApp (9.42s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-278248 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f5a35af7-0111-4e40-8f46-22bd9bba0290] Pending
helpers_test.go:352: "busybox" [f5a35af7-0111-4e40-8f46-22bd9bba0290] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f5a35af7-0111-4e40-8f46-22bd9bba0290] Running
E1018 13:16:41.020922 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003274907s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-278248 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.42s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-278248 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-278248 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.053673947s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-278248 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/embed-certs/serial/Stop (12.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-278248 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-278248 --alsologtostderr -v=3: (12.140838585s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.14s)

TestStartStop/group/no-preload/serial/DeployApp (10.44s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-896178 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [10cc61f2-4811-44c9-8175-16fed6d0f360] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [10cc61f2-4811-44c9-8175-16fed6d0f360] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003935898s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-896178 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.44s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-278248 -n embed-certs-278248
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-278248 -n embed-certs-278248: exit status 7 (71.276902ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-278248 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (52.64s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-278248 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-278248 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (52.273284506s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-278248 -n embed-certs-278248
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.64s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.6s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-896178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-896178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.445511818s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-896178 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.60s)

TestStartStop/group/no-preload/serial/Stop (12.88s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-896178 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-896178 --alsologtostderr -v=3: (12.882497729s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.88s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-896178 -n no-preload-896178
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-896178 -n no-preload-896178: exit status 7 (79.937497ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-896178 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (51.39s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-896178 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-896178 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (50.984626346s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-896178 -n no-preload-896178
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.39s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-84pgn" [b6b4b6c4-784e-485b-9756-7527b93fe963] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00265148s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-84pgn" [b6b4b6c4-784e-485b-9756-7527b93fe963] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003147934s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-278248 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-278248 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/embed-certs/serial/Pause (3.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-278248 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-278248 -n embed-certs-278248
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-278248 -n embed-certs-278248: exit status 2 (347.110535ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-278248 -n embed-certs-278248
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-278248 -n embed-certs-278248: exit status 2 (324.647467ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-278248 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-278248 -n embed-certs-278248
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-278248 -n embed-certs-278248
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.14s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-591298 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-591298 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m28.938075366s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.94s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dm8wl" [fa287bf5-7716-40ff-b302-6739ef58d81b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003718244s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dm8wl" [fa287bf5-7716-40ff-b302-6739ef58d81b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004143477s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-896178 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-896178 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/no-preload/serial/Pause (3.98s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-896178 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-896178 -n no-preload-896178
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-896178 -n no-preload-896178: exit status 2 (394.044082ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-896178 -n no-preload-896178
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-896178 -n no-preload-896178: exit status 2 (403.384451ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-896178 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-896178 -n no-preload-896178
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-896178 -n no-preload-896178
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.98s)

TestStartStop/group/newest-cni/serial/FirstStart (43.69s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-684599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1018 13:18:37.941277 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:18:46.998005 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:19:02.620034 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:19:02.626366 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:19:02.637645 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:19:02.658893 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:19:02.700243 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:19:02.781602 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:19:02.943062 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:19:03.264422 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:19:03.906357 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:19:05.188365 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:19:07.749675 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:19:12.871608 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-684599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (43.687199139s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.69s)
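
FirstStart exercises minikube's pass-through configuration: --extra-config takes component.key=value pairs, here forwarding pod-network-cidr=10.42.0.0/16 to kubeadm, and --wait narrows startup readiness to the apiserver, system pods, and default service account. A sketch of the same invocation from Go (log-verbosity flags trimmed):

    // firststart.go: the FirstStart invocation; --extra-config follows
    // minikube's component.key=value convention (here targeting kubeadm).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64",
            "start", "-p", "newest-cni-684599",
            "--memory=3072",
            "--wait=apiserver,system_pods,default_sa", // readiness gates to block on
            "--network-plugin=cni",
            "--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
            "--driver=docker",
            "--container-runtime=containerd",
            "--kubernetes-version=v1.34.1",
        ).CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            fmt.Println("start failed:", err)
        }
    }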

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-684599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)
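
The addon is enabled with per-component image and registry overrides: --images=MetricsServer=... swaps the metrics-server image for echoserver:1.4 and --registries points it at the unreachable fake.domain, so the test can verify the wiring without a real pull. A sketch that enables the addon and then greps the Deployment the way the later describe steps do; the assumption that minikube prefixes the registry onto the image name is mine, not stated in the log.

    // addonoverride.go: enable metrics-server with image/registry overrides,
    // then look for the rewritten image in the Deployment description.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        enable := exec.Command("out/minikube-linux-arm64", "addons", "enable", "metrics-server",
            "-p", "newest-cni-684599",
            "--images=MetricsServer=registry.k8s.io/echoserver:1.4",
            "--registries=MetricsServer=fake.domain")
        if out, err := enable.CombinedOutput(); err != nil {
            fmt.Println("enable failed:", err, string(out))
            return
        }
        desc, _ := exec.Command("kubectl", "--context", "newest-cni-684599",
            "describe", "deploy/metrics-server", "-n", "kube-system").Output()
        // Assumption: the overridden registry shows up in the pod template image.
        fmt.Println("override applied:", strings.Contains(string(desc), "fake.domain"))
    }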

TestStartStop/group/newest-cni/serial/Stop (1.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-684599 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-684599 --alsologtostderr -v=3: (1.331842204s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.33s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-684599 -n newest-cni-684599
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-684599 -n newest-cni-684599: exit status 7 (70.734369ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-684599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (15.11s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-684599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1018 13:19:23.113631 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-684599 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (14.712595433s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-684599 -n newest-cni-684599
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.11s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-684599 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
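
VerifyKubernetesImages lists the profile's images as JSON and flags anything that is not a stock minikube image, hence the "Found non-minikube image" line for kindest/kindnetd. A rough sketch of that scan; the repoTags field name and the registry whitelist are assumptions about the --format=json output, not taken from the test code.

    // imagescan.go: mirror the "Found non-minikube image" check.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    type image struct {
        RepoTags []string `json:"repoTags"` // assumed field name
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64",
            "-p", "newest-cni-684599", "image", "list", "--format=json").Output()
        if err != nil {
            fmt.Println("image list failed:", err)
            return
        }
        var imgs []image
        if err := json.Unmarshal(out, &imgs); err != nil {
            fmt.Println("unexpected JSON:", err)
            return
        }
        for _, img := range imgs {
            for _, tag := range img.RepoTags {
                if !strings.HasPrefix(tag, "registry.k8s.io/") &&
                    !strings.HasPrefix(tag, "gcr.io/k8s-minikube/") {
                    fmt.Println("Found non-minikube image:", tag)
                }
            }
        }
    }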

TestStartStop/group/newest-cni/serial/Pause (3.47s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-684599 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-684599 -n newest-cni-684599
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-684599 -n newest-cni-684599: exit status 2 (413.125838ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-684599 -n newest-cni-684599
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-684599 -n newest-cni-684599: exit status 2 (434.453507ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-684599 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-684599 -n newest-cni-684599
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-684599 -n newest-cni-684599
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.47s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-591298 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [114c7a71-30e2-4c27-83c0-80b99f2586fc] Pending
helpers_test.go:352: "busybox" [114c7a71-30e2-4c27-83c0-80b99f2586fc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [114c7a71-30e2-4c27-83c0-80b99f2586fc] Running
E1018 13:19:43.595432 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003623413s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-591298 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.59s)
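
DeployApp finishes by exec-ing ulimit -n inside the busybox pod to confirm the container inherited a usable open-files limit. The same probe from Go:

    // ulimitcheck.go: the DeployApp follow-up probe.
    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-591298",
            "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
        if err != nil {
            fmt.Println("exec failed:", err)
            return
        }
        n, err := strconv.Atoi(strings.TrimSpace(string(out)))
        if err != nil {
            fmt.Println("unexpected output:", strings.TrimSpace(string(out)))
            return
        }
        fmt.Println("open-file limit inside the pod:", n)
    }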

TestNetworkPlugins/group/auto/Start (58.92s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-599078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-599078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (58.918319771s)
--- PASS: TestNetworkPlugins/group/auto/Start (58.92s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-591298 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-591298 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.67071149s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-591298 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.85s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-591298 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-591298 --alsologtostderr -v=3: (12.307987421s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.31s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-591298 -n default-k8s-diff-port-591298
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-591298 -n default-k8s-diff-port-591298: exit status 7 (111.510898ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-591298 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-591298 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1018 13:20:24.556737 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-591298 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (54.976442919s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-591298 -n default-k8s-diff-port-591298
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.40s)
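
SecondStart restarts the profile with --apiserver-port=8444 instead of minikube's default 8443, which is the point of the default-k8s-diff-port group. One hedged way to confirm where the kubeconfig ends up pointing (the jsonpath query is illustrative, not part of the test):

    // portcheck.go: confirm the context's server URL carries :8444.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "config", "view", "-o",
            `jsonpath={.clusters[?(@.name=="default-k8s-diff-port-591298")].cluster.server}`).Output()
        if err != nil {
            fmt.Println("config view failed:", err)
            return
        }
        server := strings.TrimSpace(string(out))
        fmt.Println(server, "on 8444:", strings.HasSuffix(server, ":8444"))
    }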

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-599078 "pgrep -a kubelet"
I1018 13:20:39.678920 2076961 config.go:182] Loaded profile config "auto-599078": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
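
KubeletFlags greps the kubelet process command line inside the node via minikube ssh, which is how the test inspects the flags kubelet was actually started with. A sketch of the same probe; checking for a containerd endpoint in the output is an illustrative assertion, not the test's.

    // kubeletflags.go: dump the kubelet command line from inside the node.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64",
            "ssh", "-p", "auto-599078", "pgrep -a kubelet").Output()
        if err != nil {
            fmt.Println("ssh failed:", err)
            return
        }
        line := strings.TrimSpace(string(out))
        fmt.Println(line)
        // Illustrative: a containerd runtime should surface in the flags.
        fmt.Println("mentions containerd:", strings.Contains(line, "containerd"))
    }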

TestNetworkPlugins/group/auto/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-599078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xn2dp" [0df00200-58f7-4168-a083-34c9f23d9777] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xn2dp" [0df00200-58f7-4168-a083-34c9f23d9777] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003760299s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.28s)
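
NetCatPod deploys testdata/netcat-deployment.yaml and waits up to 15m for pods matching app=netcat to become healthy. A bare-bones approximation of that wait using kubectl polling; the real helpers_test.go logic also tracks Ready conditions, which this sketch skips.

    // podwait.go: poll for the app=netcat pod to reach phase Running.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(15 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "auto-599078",
                "get", "pods", "-l", "app=netcat",
                "-o", "jsonpath={.items[*].status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "Running" {
                fmt.Println("app=netcat healthy")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for app=netcat")
    }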

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-599078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)
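
The DNS step resolves kubernetes.default from inside the netcat pod, exercising pod-to-cluster-DNS traffic under the CNI being tested. The same check from Go:

    // dnscheck.go: resolve the in-cluster service name through the pod.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "auto-599078",
            "exec", "deployment/netcat", "--",
            "nslookup", "kubernetes.default").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("in-cluster DNS lookup failed:", err)
        }
    }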

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
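
Localhost and HairPin are the two nc probes: the first checks the pod can reach its own port via 127.0.0.1, the second that it can loop back to itself through its Service name (hairpin NAT), using -z (scan only), -w 5 (timeout), and -i 5 (interval). Both probes in one sketch:

    // loopback.go: the Localhost and HairPin probes from one harness.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func probe(host string) error {
        // -z: scan without sending data, -w 5: timeout, -i 5: probe interval.
        sh := fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", host)
        return exec.Command("kubectl", "--context", "auto-599078",
            "exec", "deployment/netcat", "--", "/bin/sh", "-c", sh).Run()
    }

    func main() {
        for _, host := range []string{"localhost", "netcat"} {
            if err := probe(host); err != nil {
                fmt.Println(host, "probe failed:", err) // hairpin failures show up here
            } else {
                fmt.Println(host, "reachable on 8080")
            }
        }
    }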

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k9gvp" [5cd81166-e47f-4cf0-bd47-3e7ebdb7e680] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003052975s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-k9gvp" [5cd81166-e47f-4cf0-bd47-3e7ebdb7e680] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00445361s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-591298 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.24s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-591298 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-591298 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-591298 -n default-k8s-diff-port-591298
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-591298 -n default-k8s-diff-port-591298: exit status 2 (436.850097ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-591298 -n default-k8s-diff-port-591298
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-591298 -n default-k8s-diff-port-591298: exit status 2 (471.475917ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-591298 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-591298 -n default-k8s-diff-port-591298
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-591298 -n default-k8s-diff-port-591298
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.18s)

TestNetworkPlugins/group/kindnet/Start (88.61s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-599078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-599078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m28.609411603s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.61s)

TestNetworkPlugins/group/calico/Start (60.13s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-599078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1018 13:21:46.478086 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:21:57.757824 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/no-preload-896178/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:21:57.764140 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/no-preload-896178/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:21:57.775483 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/no-preload-896178/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:21:57.796851 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/no-preload-896178/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:21:57.838162 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/no-preload-896178/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:21:57.919572 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/no-preload-896178/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:21:58.081322 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/no-preload-896178/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:21:58.402898 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/no-preload-896178/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:21:59.045010 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/no-preload-896178/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:22:00.326289 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/no-preload-896178/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:22:02.887992 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/no-preload-896178/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:22:08.009795 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/no-preload-896178/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-599078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m0.132223695s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.13s)

TestNetworkPlugins/group/calico/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-n9s9v" [655a8c01-3a31-4caf-9d4c-76f9ee43751b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1018 13:22:18.251430 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/no-preload-896178/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.002948339s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-599078 "pgrep -a kubelet"
I1018 13:22:22.686330 2076961 config.go:182] Loaded profile config "calico-599078": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-599078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f5xzx" [e7845f33-ffa0-4cbf-ab66-699a47b7729e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f5xzx" [e7845f33-ffa0-4cbf-ab66-699a47b7729e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003083848s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.28s)

TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-599078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-748kf" [d40badc5-eef4-4de6-9f80-e9cd3029dcef] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003087921s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-599078 "pgrep -a kubelet"
I1018 13:22:47.241390 2076961 config.go:182] Loaded profile config "kindnet-599078": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-599078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ttntz" [454651b1-faab-4d26-84a3-aed74d9948b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ttntz" [454651b1-faab-4d26-84a3-aed74d9948b4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004342088s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.32s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-599078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/Start (71.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-599078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-599078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m11.364978615s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.37s)
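
Unlike the named --cni=kindnet/calico/flannel/bridge presets used elsewhere in this run, custom-flannel hands --cni a manifest path (testdata/kube-flannel.yaml) for minikube to apply in place of a built-in plugin. The same start from Go (log-verbosity flags trimmed):

    // customcni.go: start a profile from a CNI manifest on disk.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64", "start",
            "-p", "custom-flannel-599078",
            "--memory=3072", "--wait=true", "--wait-timeout=15m",
            "--cni=testdata/kube-flannel.yaml", // a path, not a preset name
            "--driver=docker", "--container-runtime=containerd",
        ).CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            fmt.Println("start failed:", err)
        }
    }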

TestNetworkPlugins/group/enable-default-cni/Start (51.04s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-599078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1018 13:23:37.941268 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/functional-955523/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:23:46.997563 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:24:02.620042 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/old-k8s-version-671538/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-599078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (51.035092819s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (51.04s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-599078 "pgrep -a kubelet"
I1018 13:24:09.256943 2076961 config.go:182] Loaded profile config "custom-flannel-599078": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-599078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mtrkx" [40de1158-2202-44be-8e84-c384b3a15918] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mtrkx" [40de1158-2202-44be-8e84-c384b3a15918] Running
I1018 13:24:13.786711 2076961 config.go:182] Loaded profile config "enable-default-cni-599078": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003988645s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-599078 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-599078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-44499" [12a4cf5b-5ce1-420c-80e5-10785e0fc160] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-44499" [12a4cf5b-5ce1-420c-80e5-10785e0fc160] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.003577419s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.29s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-599078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-599078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (63.21s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-599078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-599078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m3.212484131s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.21s)

TestNetworkPlugins/group/bridge/Start (84.44s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-599078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1018 13:24:48.889406 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/default-k8s-diff-port-591298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:24:59.131601 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/default-k8s-diff-port-591298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:10.071366 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/addons-897172/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:19.613032 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/default-k8s-diff-port-591298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:39.937668 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/auto-599078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:39.944157 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/auto-599078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:39.955589 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/auto-599078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:39.977054 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/auto-599078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:40.018526 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/auto-599078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:40.100348 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/auto-599078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:40.261921 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/auto-599078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:40.583209 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/auto-599078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:41.225819 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/auto-599078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:42.507937 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/auto-599078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:25:45.069439 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/auto-599078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-599078 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m24.43797164s)
--- PASS: TestNetworkPlugins/group/bridge/Start (84.44s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-zmnmn" [69bdfa92-f5ca-4300-ba98-f33b8735dbfa] Running
E1018 13:25:50.191222 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/auto-599078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003450976s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-599078 "pgrep -a kubelet"
I1018 13:25:55.850119 2076961 config.go:182] Loaded profile config "flannel-599078": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-599078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-679d4" [70212015-ab30-4fd4-9658-fbd3284e9069] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-679d4" [70212015-ab30-4fd4-9658-fbd3284e9069] Running
E1018 13:26:00.432610 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/auto-599078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 13:26:00.574743 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/default-k8s-diff-port-591298/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003297065s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-599078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

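The Localhost and HairPin probes above differ only in their target: localhost exercises the pod's own loopback, while dialing the service name netcat routes traffic out through the Service VIP and back into the same pod (hairpin NAT, which some CNI configurations get wrong). Both use nc in zero-I/O scan mode (-z) with a 5s timeout (-w 5), as in the commands recorded above:

  # Loopback probe: no CNI involvement.
  kubectl --context flannel-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # Hairpin probe: pod -> Service "netcat" -> same pod; fails when hairpin NAT is broken.
  kubectl --context flannel-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
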
TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-599078 "pgrep -a kubelet"
I1018 13:26:13.114471 2076961 config.go:182] Loaded profile config "bridge-599078": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

TestNetworkPlugins/group/bridge/NetCatPod (10.41s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-599078 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-z7hrw" [995ac626-50c7-4f38-9d4f-f9237cd9377b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-z7hrw" [995ac626-50c7-4f38-9d4f-f9237cd9377b] Running
E1018 13:26:20.914319 2076961 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/auto-599078/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003316729s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.41s)

TestNetworkPlugins/group/bridge/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-599078 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-599078 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

Test skip (30/331)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.41s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-697075 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-697075" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-697075
--- SKIP: TestDownloadOnlyKic (0.41s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

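This job runs containerd, so docker-runtime-only tests skip automatically. A hedged sketch of a profile that would exercise them (the profile name is illustrative):

  # Start a docker-runtime profile so TestDockerFlags and similar tests would run.
  out/minikube-linux-arm64 start -p docker-rt --driver=docker --container-runtime=docker
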
TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

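The MySQL skip is purely architectural: the mysql image used by the test has no arm64 build. A quick sketch for confirming the node architecture behind the skip (the context name is illustrative):

  # Prints "arm64" on this runner, which is what triggers the skip.
  kubectl --context <profile> get nodes -o jsonpath='{.items[0].status.nodeInfo.architecture}'
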
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

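TestChangeNoneUser needs the none driver plus a non-empty SUDO_USER. A hedged sketch of satisfying both, assuming the none driver's usual root requirement (sudo itself populates SUDO_USER with the invoking user):

  # -E preserves the caller's environment for minikube; sudo sets SUDO_USER.
  sudo -E out/minikube-linux-arm64 start --driver=none
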
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

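Skaffold builds straight into the cluster's docker daemon via docker-env, which a containerd profile cannot provide. A sketch of the docker-env wiring on a docker-runtime profile (the profile name is illustrative):

  # Point the local docker CLI at the cluster's daemon so skaffold builds land in-cluster.
  eval $(out/minikube-linux-arm64 -p docker-rt docker-env)
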
TestStartStop/group/disable-driver-mounts (0.28s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-970572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-970572
--- SKIP: TestStartStop/group/disable-driver-mounts (0.28s)

TestNetworkPlugins/group/kubenet (3.88s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-599078 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-599078

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-599078

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-599078

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-599078

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-599078

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-599078

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-599078

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-599078

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-599078

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-599078

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: /etc/hosts:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: /etc/resolv.conf:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-599078

>>> host: crictl pods:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: crictl containers:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> k8s: describe netcat deployment:
error: context "kubenet-599078" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-599078" does not exist

>>> k8s: netcat logs:
error: context "kubenet-599078" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-599078" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-599078" does not exist

>>> k8s: coredns logs:
error: context "kubenet-599078" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-599078" does not exist

>>> k8s: api server logs:
error: context "kubenet-599078" does not exist

>>> host: /etc/cni:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: ip a s:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: ip r s:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: iptables-save:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: iptables table nat:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-599078" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-599078" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-599078" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: kubelet daemon config:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> k8s: kubelet logs:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 13:10:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-896213
contexts:
- context:
    cluster: pause-896213
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 13:10:44 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-896213
  name: pause-896213
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-896213
  user:
    client-certificate: /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/pause-896213/client.crt
    client-key: /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/pause-896213/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-599078

>>> host: docker daemon status:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: docker daemon config:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: docker system info:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: cri-docker daemon status:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: cri-docker daemon config:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: cri-dockerd version:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: containerd daemon status:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: containerd daemon config:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: containerd config dump:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: crio daemon status:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: crio daemon config:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: /etc/crio:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

>>> host: crio config:
* Profile "kubenet-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-599078"

----------------------- debugLogs end: kubenet-599078 [took: 3.716452607s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-599078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-599078
--- SKIP: TestNetworkPlugins/group/kubenet (3.88s)

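kubenet is a kubelet-native network plugin rather than a CNI, and the containerd runtime used by this job requires CNI networking, hence the skip. A hedged sketch of the supported path, selecting an explicit CNI instead (minikube's --cni flag accepts values such as bridge, flannel, and calico):

  # containerd profiles need a CNI; bridge is the simplest built-in choice.
  out/minikube-linux-arm64 start -p kubenet-599078 --container-runtime=containerd --cni=bridge
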
TestNetworkPlugins/group/cilium (5.05s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-599078 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-599078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-599078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-599078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-599078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-599078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-599078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-599078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-599078" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-599078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-599078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-599078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-599078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-599078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-599078" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-599078" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21647-2075029/.minikube/ca.crt
extensions:
- extension:
last-update: Sat, 18 Oct 2025 13:11:33 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.85.2:8443
name: pause-896213
contexts:
- context:
cluster: pause-896213
extensions:
- extension:
last-update: Sat, 18 Oct 2025 13:11:33 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: context_info
namespace: default
user: pause-896213
name: pause-896213
current-context: pause-896213
kind: Config
preferences: {}
users:
- name: pause-896213
user:
client-certificate: /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/pause-896213/client.crt
client-key: /home/jenkins/minikube-integration/21647-2075029/.minikube/profiles/pause-896213/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-599078

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: docker daemon config:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: docker system info:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: cri-docker daemon status:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: cri-docker daemon config:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: cri-dockerd version:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: containerd daemon status:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: containerd daemon config:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: containerd config dump:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: crio daemon status:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: crio daemon config:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: /etc/crio:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

>>> host: crio config:
* Profile "cilium-599078" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-599078"

----------------------- debugLogs end: cilium-599078 [took: 4.824423335s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-599078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-599078
--- SKIP: TestNetworkPlugins/group/cilium (5.05s)